HOWTO: Sample Functions Demonstrating GetChunk and AppendChunk
ID: Q194975
The information in this article applies to:
- ActiveX Data Objects (ADO), versions 1.5, 2.0, 2.1 SP2
- Microsoft Visual Basic Professional and Enterprise Editions for Windows, version 6.0
SUMMARY
This article describes how to read and write Binary Large Objects (BLOBs)
using the GetChunk and AppendChunk methods against fields in ADO. It also
includes sample code that uses the NWIND sample database.
MORE INFORMATION
The GetChunk and AppendChunk methods work with the LongVarChar, LongVarWChar, and
LongVarBinary column types. These correspond to TEXT, NTEXT, and IMAGE columns in
Microsoft SQL Server, and to MEMO and OLE Object fields in Microsoft Jet
databases. You can identify these columns in ADO by testing the Type
property of a Field for the values adLongVarChar, adLongVarWChar, and adLongVarBinary. You
can also test the Attributes property of a Field for the adFldLong flag:
If fld.Attributes And adFldLong Then
    ' You can use GetChunk/AppendChunk on this field.
End If
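For example, the following is a minimal sketch of the Type test (it assumes rs is an open Recordset on the Employees table; the field name is only illustrative):
Dim fld As ADODB.Field
Set fld = rs.Fields("Notes")
Select Case fld.Type
    Case adLongVarChar, adLongVarWChar, adLongVarBinary
        Debug.Print "Long column: use GetChunk/AppendChunk"
    Case Else
        Debug.Print "Standard column: use the Value property directly"
End Select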
Long columns are commonly referred to as BLOBs (Binary Large OBjects) even
though they may contain text data. The sample code below provides two
routines, BlobToFile and FileToBlob.
BlobToFile
BlobToFile determines the data type of the field and which of three methods
to use to write the BLOB data to a disk file. If the BLOB data is small
enough, it will reference the field value in its entirety without calling
GetChunk. If the BLOB size is unknown, it will call WriteFromUnsizedBinary
or WriteFromUnsizedText to write the data. This is less efficient in terms
of making extra copies of the data in local memory than the WriteFromBinary
and WriteFromText routines that are used when the size of the BLOB data is
known:
BlobToFile               Calls one of the routines below to use GetChunk
WriteFromBinary          Writes a LongVarBinary of known size to disk
WriteFromUnsizedBinary   Writes a LongVarBinary of unknown size to disk
WriteFromText            Writes a LongVarChar of known size to disk
WriteFromUnsizedText     Writes a LongVarChar of unknown size to disk
FileToBlob
FileToBlob determines whether to use AppendChunk or directly assign the
data to the BLOB field based on the size of the file. Because the size of
the file can always be determined, there are no "Unsized" routines as there
are in the BlobToFile sample code:
FileToBlob       Calls one of the routines below to use AppendChunk
ReadToBinary     Reads a file into a LongVarBinary column
ReadToText       Reads a file into a LongVarChar column
Example
The sample code for BlobToFile and FileToBlob is stored in a Module, while
the test code is behind the default form. The test code uses each of the
three methods to save to disk the Photo (IMAGE/OLE/LongVarBinary) and Notes
(TEXT/MEMO/LongVarChar/LongVarWChar) fields for Andrew Fuller from the Employees table
of the NWIND database. It then reads the files back in and creates six new
records, reading each of the three sets of files via the two different read
methods.
NOTE: With ADO 2.1 and later, the line of code
    Data = fld.GetChunk(BLOCK_SIZE)
in the sample below might raise the following error:
    Run-time error '94':
    Invalid use of Null
ADO 2.1 and later might report the ActualSize property of a text-type BLOB field
as twice the number of characters. This is correct if the BLOB field contains Unicode
text, because Unicode uses 2 bytes per character, but it is incorrect if the BLOB field
contains ANSI text, which uses 1 byte per character. When ActualSize is twice the
length of an ANSI field, GetChunk eventually attempts to read past the end of the
field, which produces the error above.
You can see this behavior in the following scenarios, and it might occur in others:
Scenario 1:
With a SQL Server 7.0 TEXT field (ANSI text), using either the OLE DB Provider for SQL Server or the ODBC Provider with the SQL Server ODBC driver.
Note that a SQL Server NTEXT field (Unicode) works with both providers.
Scenario 2:
With Access 97 MEMO fields and with Access 2000 MEMO fields with or without
Unicode compression, using either the OLE DB Provider for Jet 4.0 or the ODBC Provider with the Jet ODBC 4.0 driver, ODBCJT32.DLL.
Note that, with Access 97 MEMO fields, both the OLE DB Provider for Jet 3.51
and the ODBC Provider with the Jet ODBC 3.51 driver work correctly.
There are several possible workarounds for Run-time error '94':
- Use rs.Fieldname.ActualSize \ 2 instead of rs.Fieldname.ActualSize.
This resolves each of the specific scenarios listed above.
- Use method 2 or 3 below, neither of which relies on the ActualSize property.
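A minimal sketch of the first workaround as a helper function (the name AnsiTextSize is hypothetical, and it assumes you already know that the underlying column stores ANSI text):
Function AnsiTextSize(fld As ADODB.Field) As Long
    ' ADO 2.1 may report ActualSize in bytes (2 per character) even for
    ' ANSI text columns; halve it to get the number of characters.
    AnsiTextSize = fld.ActualSize \ 2
End Function
' Usage with the sample routine below:
' BlobToFile rs!Notes, "c:\notes1.txt", AnsiTextSize(rs!Notes), 16384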
Preparing the Data
- In Microsoft Access or another tool, open NWIND.MDB.
- Open the Employees table (or form) and locate "Andrew Fuller."
- Paste the contents of a large text file (between 30000 and 60000 bytes)
into the Notes field.
- Save the changes and exit Access.
- Add an ODBC datasource that points to the NWIND.MDB file.
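If you prefer not to create a DSN, you can instead use a DSN-less connection string on the "cn.Open" lines in the sample code below. A minimal sketch (the path is an assumption; adjust it to the location of your copy of NWIND.MDB):
' OLE DB Provider for Jet 4.0 (DSN-less):
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\NWIND.MDB"
' Or the Access ODBC driver (DSN-less):
' cn.Open "Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\NWIND.MDB"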
Sample Code
- Create a new Visual Basic project. From the Project menu, select References, and then select the Microsoft ActiveX Data Objects Library reference that matches your installed ADO version.
- Add two CommandButtons (cmdSave and cmdLoad) to the default form (Form1).
- Add the following code. You will have to change the connect strings
supplied on the "cn.Open" lines:
Option Explicit
Private Sub cmdSave_Click()
    Dim cn As ADODB.Connection, rs As ADODB.Recordset, SQL As String
    Set cn = New ADODB.Connection
    Set rs = New ADODB.Recordset
    cn.CursorLocation = adUseServer
    cn.Open "dsn=nwind_jet"   ' *** change this ***
    SQL = "SELECT * FROM Employees WHERE LastName='Fuller'"
    rs.Open SQL, cn, adOpenStatic, adLockReadOnly
    '
    ' Save using GetChunk and known size.
    ' FieldSize (ActualSize) > Threshold arg (16384)
    '
    BlobToFile rs!Photo, "c:\photo1.dat", rs!Photo.ActualSize, 16384
    BlobToFile rs!Notes, "c:\notes1.txt", rs!Notes.ActualSize, 16384
    ' Uncomment the next line of code, and comment the line above,
    ' to work around Run-time error '94': Invalid use of Null.
    ' BlobToFile rs!Notes, "c:\notes1.txt", rs!Notes.ActualSize \ 2, 16384
    '
    ' Save using GetChunk and unknown size.
    ' FieldSize not specified.
    '
    BlobToFile rs!Photo, "c:\photo2.dat"
    BlobToFile rs!Notes, "c:\notes2.txt"
    '
    ' Save without using GetChunk.
    ' FieldSize (ActualSize) < Threshold arg (defaults to 1 MB)
    '
    BlobToFile rs!Photo, "c:\photo3.dat", rs!Photo.ActualSize
    BlobToFile rs!Notes, "c:\notes3.txt", rs!Notes.ActualSize
    ' Uncomment the next line of code, and comment the line above,
    ' to work around Run-time error '94': Invalid use of Null.
    ' BlobToFile rs!Notes, "c:\notes3.txt", rs!Notes.ActualSize \ 2
    rs.Close
    cn.Close
End Sub
Private Sub cmdLoad_Click()
    Dim cn As ADODB.Connection, rs As ADODB.Recordset, SQL As String
    Set cn = New ADODB.Connection
    Set rs = New ADODB.Recordset
    cn.CursorLocation = adUseServer
    cn.Open "dsn=nwind_jet"   ' *** change this ***
    SQL = "SELECT * FROM Employees"
    rs.Open SQL, cn, adOpenKeyset, adLockOptimistic
    '
    ' Load using AppendChunk.
    '
    rs.AddNew
    rs!FirstName = "Test"
    rs!LastName = "Fuller11"
    FileToBlob "c:\photo1.dat", rs!Photo, 16384
    FileToBlob "c:\notes1.txt", rs!Notes, 16384
    rs.Update
    rs.AddNew
    rs!FirstName = "Test"
    rs!LastName = "Fuller21"
    FileToBlob "c:\photo2.dat", rs!Photo, 16384
    FileToBlob "c:\notes2.txt", rs!Notes, 16384
    rs.Update
    rs.AddNew
    rs!FirstName = "Test"
    rs!LastName = "Fuller31"
    FileToBlob "c:\photo3.dat", rs!Photo, 16384
    FileToBlob "c:\notes3.txt", rs!Notes, 16384
    rs.Update
    '
    ' Load without using AppendChunk.
    '
    rs.AddNew
    rs!FirstName = "Test"
    rs!LastName = "Fuller12"
    FileToBlob "c:\photo1.dat", rs!Photo
    FileToBlob "c:\notes1.txt", rs!Notes
    rs.Update
    rs.AddNew
    rs!FirstName = "Test"
    rs!LastName = "Fuller22"
    FileToBlob "c:\photo2.dat", rs!Photo
    FileToBlob "c:\notes2.txt", rs!Notes
    rs.Update
    rs.AddNew
    rs!FirstName = "Test"
    rs!LastName = "Fuller32"
    FileToBlob "c:\photo3.dat", rs!Photo
    FileToBlob "c:\notes3.txt", rs!Notes
    rs.Update
    rs.Close
    cn.Close
End Sub
- Add a new module to the project (Module1) with the following code:
Option Explicit
Const BLOCK_SIZE = 16384
Sub BlobToFile(fld As ADODB.Field, ByVal FName As String, _
               Optional FieldSize As Long = -1, _
               Optional Threshold As Long = 1048576)
    '
    ' Assumes file does not exist
    ' Data cannot exceed approx. 2Gb in size
    '
    Dim F As Long, bData() As Byte, sData As String
    F = FreeFile
    Open FName For Binary As #F
    Select Case fld.Type
        Case adLongVarBinary
            If FieldSize = -1 Then ' BLOB field is of unknown size
                WriteFromUnsizedBinary F, fld
            Else                   ' BLOB field is of known size
                If FieldSize > Threshold Then ' very large actual data
                    WriteFromBinary F, fld, FieldSize
                Else                          ' smallish actual data
                    bData = fld.Value
                    Put #F, , bData ' Put adds overhead if you pass fld.Value directly
                End If
            End If
        Case adLongVarChar, adLongVarWChar
            If FieldSize = -1 Then
                WriteFromUnsizedText F, fld
            Else
                If FieldSize > Threshold Then
                    WriteFromText F, fld, FieldSize
                Else
                    sData = fld.Value
                    Put #F, , sData ' Put adds overhead if you pass fld.Value directly
                End If
            End If
    End Select
    Close #F
End Sub
Sub WriteFromBinary(ByVal F As Long, fld As ADODB.Field, _
                    ByVal FieldSize As Long)
    Dim Data() As Byte, BytesRead As Long
    Do While FieldSize <> BytesRead
        If FieldSize - BytesRead < BLOCK_SIZE Then
            ' Fewer than BLOCK_SIZE bytes remain; fetch just the remainder.
            Data = fld.GetChunk(FieldSize - BytesRead)
            BytesRead = FieldSize
        Else
            Data = fld.GetChunk(BLOCK_SIZE)
            BytesRead = BytesRead + BLOCK_SIZE
        End If
        Put #F, , Data
    Loop
End Sub
Sub WriteFromUnsizedBinary(ByVal F As Long, fld As ADODB.Field)
    Dim Data() As Byte, Temp As Variant
    Do
        Temp = fld.GetChunk(BLOCK_SIZE)
        If IsNull(Temp) Then Exit Do
        Data = Temp
        Put #F, , Data
    Loop While LenB(Temp) = BLOCK_SIZE
End Sub
Sub WriteFromText(ByVal F As Long, fld As ADODB.Field, _
                  ByVal FieldSize As Long)
    Dim Data As String, CharsRead As Long
    Do While FieldSize <> CharsRead
        If FieldSize - CharsRead < BLOCK_SIZE Then
            ' Fewer than BLOCK_SIZE characters remain; fetch just the remainder.
            Data = fld.GetChunk(FieldSize - CharsRead)
            CharsRead = FieldSize
        Else
            Data = fld.GetChunk(BLOCK_SIZE)
            CharsRead = CharsRead + BLOCK_SIZE
        End If
        Put #F, , Data
    Loop
End Sub
Sub WriteFromUnsizedText(ByVal F As Long, fld As ADODB.Field)
    Dim Data As String, Temp As Variant
    Do
        Temp = fld.GetChunk(BLOCK_SIZE)
        If IsNull(Temp) Then Exit Do
        Data = Temp
        Put #F, , Data
    Loop While Len(Temp) = BLOCK_SIZE
End Sub
Sub FileToBlob(ByVal FName As String, fld As ADODB.Field, _
               Optional Threshold As Long = 1048576)
    '
    ' Assumes file exists
    ' Assumes calling routine does the UPDATE
    ' File cannot exceed approx. 2Gb in size
    '
    Dim F As Long, Data() As Byte, FileSize As Long
    F = FreeFile
    Open FName For Binary As #F
    FileSize = LOF(F)
    Select Case fld.Type
        Case adLongVarBinary
            If FileSize > Threshold Then
                ReadToBinary F, fld, FileSize
            Else
                Data = InputB(FileSize, F)
                fld.Value = Data
            End If
        Case adLongVarChar, adLongVarWChar
            If FileSize > Threshold Then
                ReadToText F, fld, FileSize
            Else
                fld.Value = Input(FileSize, F)
            End If
    End Select
    Close #F
End Sub
Sub ReadToBinary(ByVal F As Long, fld As ADODB.Field, _
                 ByVal FileSize As Long)
    Dim Data() As Byte, BytesRead As Long
    Do While FileSize <> BytesRead
        If FileSize - BytesRead < BLOCK_SIZE Then
            Data = InputB(FileSize - BytesRead, F)
            BytesRead = FileSize
        Else
            Data = InputB(BLOCK_SIZE, F)
            BytesRead = BytesRead + BLOCK_SIZE
        End If
        fld.AppendChunk Data
    Loop
End Sub
Sub ReadToText(ByVal F As Long, fld As ADODB.Field, _
               ByVal FileSize As Long)
    Dim Data As String, CharsRead As Long
    Do While FileSize <> CharsRead
        If FileSize - CharsRead < BLOCK_SIZE Then
            Data = Input(FileSize - CharsRead, F)
            CharsRead = FileSize
        Else
            Data = Input(BLOCK_SIZE, F)
            CharsRead = CharsRead + BLOCK_SIZE
        End If
        fld.AppendChunk Data
    Loop
End Sub
- Run the project and click the cmdSave button.
- In the C:\ directory, you should find the following files:
notes1.txt
notes2.txt
notes3.txt
photo1.dat
photo2.dat
photo3.dat
The three "photo" files should be the same size as each other. The three
"notes" files should be the same size as each other.
- Click the cmdLoad button.
- Open the database using Access and you should see six additional
employees with photos and notes loaded back correctly.
Notes
Following are some suggestions for using BLOBs with ADO. These parallel
many of the suggestions in the following Microsoft Knowledge Base article:
Q153238 HOWTO: Use GetChunk and AppendChunk Methods of RDO Object
- It is more efficient in terms of retrieval of BLOB data to simply store
the data in files on the server with a pointer in the main record (or
you can use some sort of structured directory/file naming system based
on the primary key value). This has the advantage of (a) eliminating
server overhead, (b) allowing the files to be stored on a second server,
(c) allowing network security attributes to be set on individual files,
and (d) allowing retrieval of files even when the server is down. This
is especially true if the files are some sort of document type, such as
bitmaps (.bmp), word processor files (.doc), or spreadsheets (.xls)
where you can point the host application directly to the file on the
server.
- When using certain providers, most notably ODBC to SQL Server and other
databases, you may have to take special care in retrieving BLOB data,
such as placing BLOB columns at the end of the field list and
referencing all non-BLOB fields before you access the BLOB columns.
Whether this is necessary depends on a number of factors, such as:
- Provider (typically ODBC)
- Back-end server
- Cursor location (typically client)
- Cursor type
- Whether you are selecting from a VIEW or getting records returned from a
stored procedure.
Because this depends on a variety of factors, the following guidelines can help
if you are having problems with BLOB columns (see the sketch after these notes):
- Try a native OLE DB provider instead of an ODBC provider.
- Use server-side cursors (such as adOpenKeyset).
- Select the primary key column(s) in addition to any other columns.
- Select the BLOB columns last. Select individual fields, not "*".
- Access all non-BLOB columns first (store them if necessary).
- Access the BLOB columns in the order in which they were selected. You may
only be able to reference each one once before the cursor loses its value.
- When editing a BLOB column using the AppendChunk method, you may have
to edit at least one non-BLOB column in your recordset as well. BLOBs
are typically not updateable with Static or Forward-only cursors on
ODBC datasources.
- If you use ODBC to Jet, you cannot update a recordset returned by a
stored procedure (QueryDef) at all, because the driver forces such
recordsets to be read-only.
- The Microsoft OLE DB provider for Oracle does not currently support random
access to BLOB data with server-side cursors; the BLOB columns must
appear at the end of the SELECT list.
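The following is a minimal sketch of the guidelines above, run against the NWIND Employees table (the DSN, file names, and variable names are only illustrative; how strict you must be varies by provider and cursor settings):
Dim cn As ADODB.Connection, rs As ADODB.Recordset
Dim EmpID As Variant, LName As String

Set cn = New ADODB.Connection
cn.Open "dsn=nwind_jet" ' *** change this ***

' Primary key first, other non-BLOB columns next, BLOB columns last; no "*".
Set rs = New ADODB.Recordset
rs.Open "SELECT EmployeeID, LastName, Notes, Photo FROM Employees", _
        cn, adOpenKeyset, adLockReadOnly   ' server-side keyset cursor

' Read (and store) all non-BLOB columns before touching the BLOBs ...
EmpID = rs!EmployeeID.Value
LName = rs!LastName.Value

' ... then read each BLOB column once, in the order selected.
BlobToFile rs!Notes, "c:\notes_guideline.txt"
BlobToFile rs!Photo, "c:\photo_guideline.dat"

rs.Close
cn.Close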
With the ODBC cursor library, it is not possible to use the GetChunk or
AppendChunk methods on a recordset returned from a stored procedure. This
is because the BLOB data is not normally retrieved with the rest of the
data in order to save bandwidth. When a stored procedure creates a
recordset, the cursor driver cannot determine how to query for the BLOB
data after the fact because it cannot determine the base tables or key
fields to use. Server-side cursors alleviate this problem but limit you to
a single statement per stored procedure (a SQL Server restriction).
If you want to update a BLOB column, you must therefore expose the base table
and create the cursor with a standard SELECT statement against that base
table. This is true even when coding directly to the ODBC API; it is not
specific to ADO.
REFERENCES
For additional information, please see the following articles in the Microsoft Knowledge Base:
Q185958 HOWTO: Use ADO GetChunk/AppendChunk with Oracle for BLOB Data
Q173611 FIX: ADO GetChunk Method Breaks Blob Data for DBCS
Q189415 FILE: AdoChunk.exe Using GetChunk and AppendChunk in Visual C++
Using Data Access Objects:
Q103257 ACC: Reading, Storing, & Writing Binary Large Objects (BLOBs)
© Microsoft Corporation 1999, All Rights Reserved. Contributions by Malcolm Stewart, Microsoft Corporation
Additional query words:
Keywords : kbAccess kbADO150 kbADO200 kbDatabase kbSQLServ kbVBp600 kbGrpVBDB kbGrpMDAC kbDSupport kbADO210sp2 kbADO210sp2bug
Version : WINDOWS:1.5,2.0,2.1 SP2,6.0
Platform : WINDOWS
Issue type : kbhowto