NAME

DBM::Deep::Internals - Out of date documentation on DBM::Deep internals

OUT OF DATE

This document is out of date. It describes an intermediate file format used
during the development from 0.983 to 1.0000. It will be rewritten soon.

So far, the description of the header format has been updated.

DESCRIPTION

This is a document describing the internal workings of DBM::Deep. It is not
necessary to read this document if you only intend to be a user. This
document is intended for people who either want a deeper understanding of
the specifics of how DBM::Deep works or who wish to help program DBM::Deep.

CLASS LAYOUT

DBM::Deep is broken up into five classes in three inheritance hierarchies.
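For orientation, the following minimal sketch shows the public API these
classes sit behind; it uses only the documented DBM::Deep interface, and the
file name is arbitrary:

    use strict;
    use warnings;
    use DBM::Deep;

    # DBM::Deep is the user-facing class; nested hashes and arrays are
    # handled transparently by the internal classes this document covers.
    my $db = DBM::Deep->new( "foo.db" );

    $db->{key}    = 'value';                  # written to disk, not RAM
    $db->{nested} = { list => [ 1, 2, 3 ] };  # multi-level structures too

    print $db->{nested}{list}[2], "\n";       # prints 3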
FILE LAYOUT

This describes the 1.0003 and 2.0000 formats, which internally are numbered
3 and 4, respectively. The internal numbers are used in this section. These
two formats are almost identical.

DBM::Deep uses a tagged file layout. Every section has a tag, a size, and
then the data.

File header

The file header consists of two parts. The first part is a fixed length of
13 bytes:

    DPDB h VVVV SSSS
    \  / |    \    \
     \/  '---. \    '--- size of the second part of the header
    file      \ '--- version
    signature  tag
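As an illustration, the fixed first part could be read back as follows. This
is a hypothetical sketch: the document does not specify the byte order of
the version and size fields, so the big-endian ('N') encoding here is an
assumption:

    use strict;
    use warnings;

    # Parse the fixed 13-byte first part of the header. Field widths follow
    # the diagram above; the 'N' (big-endian 32-bit) encoding of VVVV and
    # SSSS is an assumption.
    open my $fh, '<:raw', 'foo.db' or die "open: $!";
    read $fh, my $buf, 13 or die "short read";

    my ( $signature, $tag, $version, $size2 ) = unpack 'a4 a1 N N', $buf;

    die "not a DBM::Deep file" unless $signature eq 'DPDB';
    printf "tag=%s version=%d second part is %d bytes\n",
        $tag, $version, $size2;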
The second part of the header is as follows:

    S B S T T(TTTTTTTTT...) (SS SS SS SS ...)   (continued...)
    | | | |  \               \
    | | | |   \               '--- staleness counters
    | | | |    '--- txn bitfield
    | | | '--- number of transactions
    | | '--- data sector size
    | '--- max buckets
    '--- byte size

    (continuation...)

    BB(BBBBBB) DD(DDDDDD) II(IIIIII)
    |          |          |
    |          |          '--- free index
    |          '--- free data
    '--- free blist
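A sketch of walking this second part, following the field order in the
diagram, might look like the code below. The one-byte widths of the first
four fields, and the widths assumed for the txn bitfield, the staleness
counters, and the free-sector pointers, are illustrative assumptions, not
something this document specifies:

    use strict;
    use warnings;

    # Hypothetical walk of the second part of the header. Pointer fields
    # are assumed to be 'byte size' bytes wide and staleness counters 4
    # bytes per transaction; adjust to the real on-disk widths.
    open my $fh, '<:raw', 'foo.db' or die "open: $!";
    seek $fh, 13, 0;                  # skip the fixed first part
    read $fh, my $buf, 4 or die "short read";

    my ( $byte_size, $max_buckets, $data_sector_size, $num_txns )
        = unpack 'C C C C', $buf;

    read $fh, my $bitfield,  int( ( $num_txns + 7 ) / 8 );  # txn bitfield
    read $fh, my $staleness, 4 * $num_txns;             # staleness counters

    # Heads of the free-sector chains: blist, data, index.
    read $fh, my $free, 3 * $byte_size;
    my ( $free_blist, $free_data, $free_index )
        = unpack "(a$byte_size)3", $free;

    printf "byte size %d, max buckets %d, sector size %d, %d transactions\n",
        $byte_size, $max_buckets, $data_sector_size, $num_txns;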
Index

The Index parts can be tagged either as Hash, Array, or Index. The latter is
used if there was a reindexing due to a bucketlist growing too large. The
others are the root index for their respective datatypes. The index consists
of a tag, a size, and then 256 sections containing file locations. Each
section corresponds to one of the values representable in a byte.

The index is used as follows: whenever a hashed key is being looked up, the
first byte is used to determine which location to go to from the root index.
Then, if that location is also an index, the second byte is used, and so
forth until a bucketlist is found.

Bucketlist

This is the part that contains the link to the data section. A bucketlist
defaults to being 16 buckets long (modifiable by the max_buckets parameter
used when creating a new file). Each bucket contains an MD5 and the location
of the appropriate key section.

Key area

This is the part that handles transactional awareness. There are max_buckets
sections. Each section contains the location of the data section, a
transaction ID, and whether that transaction considers this key to be
deleted or not.

Data area

This is the part that actually stores the key, value, and class (if
appropriate). The key is stored after the value because the value is
requested more often than the key.
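Putting the index, bucketlist, and key/data sections together, a lookup
conceptually proceeds as in the following self-contained model. It mimics
the walk described above on an in-memory structure; it is not DBM::Deep's
actual code:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);

    # Illustrative model of the lookup walk. Index nodes are hashes of the
    # form { type => 'Index', slots => [256 entries] }; bucketlists are
    # { type => 'Bucketlist', buckets => [ { md5, loc } ] }.
    sub lookup {
        my ( $node, $key ) = @_;
        my $md5   = md5($key);        # 16-byte digest of the key
        my $depth = 0;

        # Consume one digest byte per index level until a bucketlist.
        while ( $node->{type} eq 'Index' ) {
            my $byte = ord substr $md5, $depth++, 1;
            $node = $node->{slots}[$byte]
                or return;            # empty slot: key not present
        }

        # Scan the (up to max_buckets) buckets for a matching MD5.
        for my $bucket ( @{ $node->{buckets} } ) {
            return $bucket->{loc} if $bucket->{md5} eq $md5;
        }
        return;                       # not found
    }

    # Tiny demonstration: one index level over a single bucketlist.
    my $bl = { type    => 'Bucketlist',
               buckets => [ { md5 => md5('foo'), loc => 4096 } ] };
    my $root = { type => 'Index', slots => [] };
    $root->{slots}[ ord substr( md5('foo'), 0, 1 ) ] = $bl;

    printf "foo found at location %d\n", lookup( $root, 'foo' );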
PERFORMANCE

DBM::Deep is written completely in Perl. It is also a multi-process DBM that
uses the datafile as the method of synchronizing between multiple processes.
This is unlike most RDBMSes, such as MySQL and Oracle. Furthermore, unlike
all RDBMSes, DBM::Deep stores both the data and the structure of that data
as it would appear in a Perl program.

CPU

DBM::Deep attempts to be CPU-light. As it stores all the data on disk,
DBM::Deep is I/O-bound, not CPU-bound.

RAM

DBM::Deep uses extremely little RAM relative to the amount of data you can
access. You can iterate through a million keys (using each()) without
increasing your memory usage at all.

DISK

DBM::Deep is I/O-bound, pure and simple. The faster your disk, the faster
DBM::Deep will be. Currently, performing "my $x = $db->{foo}" requires a
minimum of 4 seeks and 1332 + N bytes read, where N is the length of your
data. (All values assume a medium filesize.) The seeks correspond to reading
the root index, then the bucketlist, then the key area, and finally the data
sector itself.
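As noted under RAM above, iteration streams keys from disk rather than
caching them; this sketch exercises that claim using only the documented
DBM::Deep API (the file name is arbitrary):

    use strict;
    use warnings;
    use DBM::Deep;

    # Iterating a DBM::Deep hash with each() walks the file on disk;
    # memory usage stays flat no matter how many keys the file holds.
    my $db = DBM::Deep->new( "big.db" );

    my $count = 0;
    while ( my ( $key, $value ) = each %$db ) {
        $count++;                 # touch every pair without caching them
    }
    print "saw $count keys\n";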
Every additional level of indexing (if there are enough keys) requires an
additional seek and the reading of 1029 additional bytes. If the value is
blessed, one additional seek and 9 + M bytes are read (where M is the length
of the classname).

Arrays are (currently) even worse, because they are treated as "funny
hashes" with the length stored as just another key. This means that any
lookup with a negative index performs this entire process twice: once for
the length and once for the value.

ACTUAL TESTS

SPEED

Obviously, DBM::Deep isn't going to be as fast as some C-based DBMs, such as
the almighty BerkeleyDB. But it makes up for it in features like true
multi-level hash/array support and cross-platform FTPable files. Even so,
DBM::Deep is still pretty fast, and the speed stays fairly consistent, even
with huge databases. Here is some test data:

    Adding 1,000,000 keys to new DB file...
    At 100 keys, avg. speed is 2,703 keys/sec
    At 200 keys, avg. speed is 2,642 keys/sec
    At 300 keys, avg. speed is 2,598 keys/sec
    At 400 keys, avg. speed is 2,578 keys/sec
    At 500 keys, avg. speed is 2,722 keys/sec
    At 600 keys, avg. speed is 2,628 keys/sec
    At 700 keys, avg. speed is 2,700 keys/sec
    At 800 keys, avg. speed is 2,607 keys/sec
    At 900 keys, avg. speed is 2,190 keys/sec
    At 1,000 keys, avg. speed is 2,570 keys/sec
    At 2,000 keys, avg. speed is 2,417 keys/sec
    At 3,000 keys, avg. speed is 1,982 keys/sec
    At 4,000 keys, avg. speed is 1,568 keys/sec
    At 5,000 keys, avg. speed is 1,533 keys/sec
    At 6,000 keys, avg. speed is 1,787 keys/sec
    At 7,000 keys, avg. speed is 1,977 keys/sec
    At 8,000 keys, avg. speed is 2,028 keys/sec
    At 9,000 keys, avg. speed is 2,077 keys/sec
    At 10,000 keys, avg. speed is 2,031 keys/sec
    At 20,000 keys, avg. speed is 1,970 keys/sec
    At 30,000 keys, avg. speed is 2,050 keys/sec
    At 40,000 keys, avg. speed is 2,073 keys/sec
    At 50,000 keys, avg. speed is 1,973 keys/sec
    At 60,000 keys, avg. speed is 1,914 keys/sec
    At 70,000 keys, avg. speed is 2,091 keys/sec
    At 80,000 keys, avg. speed is 2,103 keys/sec
    At 90,000 keys, avg. speed is 1,886 keys/sec
    At 100,000 keys, avg. speed is 1,970 keys/sec
    At 200,000 keys, avg. speed is 2,053 keys/sec
    At 300,000 keys, avg. speed is 1,697 keys/sec
    At 400,000 keys, avg. speed is 1,838 keys/sec
    At 500,000 keys, avg. speed is 1,941 keys/sec
    At 600,000 keys, avg. speed is 1,930 keys/sec
    At 700,000 keys, avg. speed is 1,735 keys/sec
    At 800,000 keys, avg. speed is 1,795 keys/sec
    At 900,000 keys, avg. speed is 1,221 keys/sec
    At 1,000,000 keys, avg. speed is 1,077 keys/sec

This test was performed on a PowerMac G4 1 GHz running Mac OS X 10.3.2 and
Perl 5.8.1, with an 80 GB Ultra ATA/100 hard drive spinning at 7200 RPM. The
hash keys and values were between 6 and 12 characters in length. The DB file
ended up at 210 MB. Run time was 12 min 3 sec.

MEMORY USAGE

One of the great things about DBM::Deep is that it uses very little memory.
Even with huge databases (1,000,000+ keys) you will not see much increased
memory on your process. DBM::Deep relies solely on the filesystem for
storing and fetching data. Here is output from top before even opening a
database handle:

      PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    22831 root      11   0  2716 2716  1296 R     0.0  0.2   0:07 perl

Basically the process is taking 2,716K of memory. And here is the same
process after storing and fetching 1,000,000 keys:

      PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    22831 root      14   0  2772 2772  1328 R     0.0  0.2  13:32 perl

Notice the memory usage increased by only 56K.
This test was performed on a 700 MHz x86 box running RedHat Linux 7.2 and
Perl 5.6.1.