Work around SQL Server maximum columns limit 1024 and 8kb record size
I am creating a table with 1000 columns. Most of the columns are of nvarchar type. The table is created, but with a warning:
Warning: The table "Test" has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.
Most of the columns of the table already have data in them (i.e. 99% of the columns have data). When I try to update any column after the 310th (where all of the first 309 columns already have some value), it gives the error:
Cannot create a row of size 8061 which is greater than the allowable maximum row size of 8060.
I am inserting this data into the first 308 columns:
"Lorem ipsum dolor sit amet, consectetur adipisicing elit."
When I use the ntext data type it allows me to update about 450 columns, but beyond that ntext does not allow it either. I have to update at least 700 columns, which SQL Server is not allowing me to do. I am in a scenario where I cannot move some columns of the table to another table.
Actually, I am working on an existing Windows application. It's a very large Windows application.
The table into which I am trying to insert data for up to 700 nvarchar columns is created dynamically at runtime. Only in some cases does it require inserting 400-600 columns; generally it needs 100-200 columns, which I am able to process easily.
The problem is that I cannot split this table into multiple tables, because a lot of tables are created with this structure and the table names are maintained in another table; i.e. there are more than 100 tables with this structure and they are created dynamically. For creating the tables and manipulating their data, 4-5 languages (C#, Java, ...) are used, and WCF, Windows Services and web services are also involved.
So I don't think it would be easy to manipulate the table and its data after splitting it. Splitting the table would require lots of structural changes.
So please suggest what would be the best way to solve this issue.
I have also tried to use a sparse column, like:
Create table ABCD(Id int, Name varchar(100) Sparse, Age int);
I have also thought about a columnstore index, but it does not solve my purpose.
Sparse columns allow me to create 30,000 columns in a table, but they still restrict me on page size.
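For reference, the wide-table variant of that sparse-column attempt looks something like the sketch below (the table and column names are only placeholders); the XML column set is what raises the 1,024-column limit to 30,000, but the record-size restriction I mentioned still applies.

-- Sketch of a wide table: sparse columns plus an XML column set.
-- This raises the column limit but does not remove the row-size limit.
CREATE TABLE WideSketch
(
    Id int PRIMARY KEY,
    Col1 nvarchar(500) SPARSE NULL,
    Col2 nvarchar(500) SPARSE NULL,
    AllCols xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);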
Is there any way to achieve this using some temporary table or any other type of SQL Server object?
Answer by joe for Work around SQL Server maximum columns limit 1024 and 8kb record size
There are limits for each row in SQL Server. http://msdn.microsoft.com/en-us/library/ms143432.aspx gives the details.
Answer by Atheer Mostafa for Work around SQL Server maximum columns limit 1024 and 8kb record size
Max columns per 'nonwide' table: 1,024. Max columns per 'wide' table: 30,000.
Although, what exactly is the case where you require this number of columns in a single table? It is highly recommended to partition your table vertically several times to get better performance and easier development.
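A rough illustration of the kind of vertical split meant here (just a sketch; the table, column, and view names are placeholders, not the actual schema): the columns are spread over two tables that share the same key, and a view joins them back together for code that expects one wide table.

-- Sketch: spread the columns over two tables sharing the same key.
CREATE TABLE TestPart1
(
    Id int PRIMARY KEY,
    Col001 nvarchar(500) NULL,
    /* ... more columns ... */
    Col500 nvarchar(500) NULL
);

CREATE TABLE TestPart2
(
    Id int PRIMARY KEY REFERENCES TestPart1 (Id),
    Col501 nvarchar(500) NULL,
    /* ... more columns ... */
    Col999 nvarchar(500) NULL
);
GO

-- A view re-assembles the full row for readers.
CREATE VIEW TestAll
AS
SELECT p1.Id,
       p1.Col001, /* ... */ p1.Col500,
       p2.Col501, /* ... */ p2.Col999
FROM TestPart1 AS p1
JOIN TestPart2 AS p2 ON p2.Id = p1.Id;
GO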
Answer by Stevan Trajkoski for Work around SQL Server maximum columns limit 1024 and 8kb record size
Having a table with 1,000 columns tells you that there is something very wrong in the database design.
I have inherited a project in which one of the tables had more than 500 columns, and after more than one year I am still unable to significantly reduce it, because I would have to rework 90% of the application.
So redesign your DB before it is too late.
Answer by Martin Smith for Work around SQL Server maximum columns limit 1024 and 8kb record size
This simply isn't possible. See Inside the Storage Engine: Anatomy of a record
Assuming your table is something like this.
CREATE TABLE T1(
    col_1 varchar(8000) NULL,
    col_2 varchar(8000) NULL,
    /*....*/
    col_999 varchar(8000) NULL,
    col_1000 varchar(8000) NULL
)
Then even a row with all NULL values will use the following storage:
- 1 byte status bits A
- 1 byte status bits B
- 2 bytes column count offset
- 125 bytes NULL_BITMAP (1 bit per column for 1,000 columns)
So that is a guaranteed 129 bytes used up already (leaving 7,931).
If any of the columns have a value that is not either NULL or an empty string then you also need space for:
- 2 bytes variable length column count (leaving 7,929).
- Anywhere between 2 - 2,000 bytes for the column offset array.
- The data itself.
The column offset array consumes 2 bytes per variable length column except if that column and all later columns are also zero length. So updating col_1000 would force the entire 2,000 bytes to be used, whereas updating col_1 would just use 2 bytes.
So you could populate each column with 5 bytes of data and, taking into account the 2 bytes each in the column offset array, that would add up to 7,000 bytes, which is within the 7,929 remaining.
However, the data you are storing is 102 bytes (51 nvarchar characters), so it can be stored off row with a 24 byte pointer to the actual data remaining in the row.

FLOOR(7929/(24 + 2)) = 304

So the best case would be that you could store 304 columns of data of this length, and that is if you are updating from col_1, col_2, .... If col_1000 contains data then the calculation is

FLOOR(5929/24) = 247
For NTEXT the calculation is similar, except it can use a 16 byte pointer, which would allow you to squeeze data into a few extra columns:

FLOOR(7929/(16 + 2)) = 440
The need to follow all these off row pointers for any SELECT against the table would likely be highly detrimental to performance.
Script to test this
DROP TABLE T1

/* Create table with 1000 columns */
DECLARE @CreateTableScript nvarchar(max) = 'CREATE TABLE T1('

SELECT @CreateTableScript += 'col_' + LTRIM(number) + ' VARCHAR(8000),'
FROM master..spt_values
WHERE type = 'P' AND number BETWEEN 1 AND 1000
ORDER BY number

SELECT @CreateTableScript += ')'

EXEC(@CreateTableScript)

/* Insert single row with all NULL */
INSERT INTO T1 DEFAULT VALUES

/* Updating the first 304 cols succeeds. Change to 305 and it fails */
DECLARE @UpdateTableScript nvarchar(max) = 'UPDATE T1 SET '

SELECT @UpdateTableScript += 'col_' + LTRIM(number) + ' = REPLICATE(1,1000),'
FROM master..spt_values
WHERE type = 'P' AND number BETWEEN 1 AND 304
ORDER BY number

SET @UpdateTableScript = LEFT(@UpdateTableScript, LEN(@UpdateTableScript) - 1)
EXEC(@UpdateTableScript)
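To check where the data from the script above actually ends up, a query along these lines (a sketch, not part of the original test; it assumes T1 is in the current database) lists the allocation units for the table and shows how many pages sit in row versus in the ROW_OVERFLOW_DATA allocation unit:

-- Sketch: page counts per allocation unit type for T1
-- (IN_ROW_DATA vs ROW_OVERFLOW_DATA vs LOB_DATA).
SELECT au.type_desc, au.total_pages, au.used_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
    ON au.container_id = CASE WHEN au.type IN (1, 3) THEN p.hobt_id ELSE p.partition_id END
WHERE p.object_id = OBJECT_ID('T1');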
Answer by Manish for Work around SQL Server maximum columns limit 1024 and 8kb record size
Creating a table with n number of columns of datatype NVARCHAR:
CREATE PROC [dbo].[CreateMaxColTable_Nvarchar500]
(@TableName nvarchar(100), @NumofCols int)
AS
BEGIN
    DECLARE @i INT
    DECLARE @MAX INT
    DECLARE @SQL VARCHAR(MAX)
    DECLARE @j VARCHAR(10)
    DECLARE @len int

    SELECT @i = 1
    SELECT @MAX = @NumofCols
    SET @SQL = 'CREATE TABLE ' + @TableName + '('

    -- Append one NVARCHAR(500) column (A1, A2, ...) per iteration
    WHILE @i <= @MAX
    BEGIN
        SELECT @j = CAST(@i AS varchar)
        SELECT @SQL = @SQL + 'A' + @j + ' NVARCHAR(500) , '
        SET @i = @i + 1
    END

    -- Trim the trailing comma and close the column list
    SELECT @len = LEN(@SQL)
    SELECT @SQL = SUBSTRING(@SQL, 0, @len - 1)
    SELECT @SQL = @SQL + ' )'

    EXEC (@SQL)
END
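For example, it can be called like this (the table name 'Test300' is just an illustration):

-- Creates a table named Test300 with columns A1 ... A300, each NVARCHAR(500).
EXEC dbo.CreateMaxColTable_Nvarchar500 @TableName = 'Test300', @NumofCols = 300;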
Answer by Kaushik Sharma for Work around SQL Server maximum columns limit 1024 and 8kb record size
SQL Server Maximum Columns Limit
- Bytes per short string column: 8,000
- Bytes per GROUP BY, ORDER BY: 8,060
- Bytes per row: 8,060
- Columns per index key: 16
- Columns per foreign key: 16
- Columns per primary key: 16
- Columns per nonwide table: 1,024
- Columns per wide table: 30,000
- Columns per SELECT statement: 4,096
- Columns per INSERT statement: 4,096
- Columns per UPDATE statement (wide tables): 4,096
When you combine varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns that exceed 8,060 bytes per row, consider the following:
Surpassing the 8,060-byte row-size limit might affect performance because SQL Server still maintains a limit of 8 KB per page. When a combination of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds this limit, the SQL Server Database Engine moves the record column with the largest width to another page in the ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the original page. Moving large records to another page occurs dynamically as records are lengthened based on update operations. Update operations that shorten records may cause records to be moved back to the original page in the IN_ROW_DATA allocation unit. Also, querying and performing other select operations, such as sorts or joins on large records that contain row-overflow data slows processing time, because these records are processed synchronously instead of asynchronously.
Therefore, when you design a table with multiple varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns, consider the percentage of rows that are likely to flow over and the frequency with which this overflow data is likely to be queried. If there are likely to be frequent queries on many rows of row-overflow data, consider normalizing the table so that some columns are moved to another table. This can then be queried in an asynchronous JOIN operation.
- The length of individual columns must still fall within the limit of 8,000 bytes for varchar, nvarchar, varbinary, sql_variant, and CLR user-defined type columns. Only their combined lengths can exceed the 8,060-byte row limit of a table.
- The sum of other data type columns, including char and nchar data, must fall within the 8,060-byte row limit. Large object data is also exempt from the 8,060-byte row limit.
- The index key of a clustered index cannot contain varchar columns that have existing data in the ROW_OVERFLOW_DATA allocation unit. If a clustered index is created on a varchar column and the existing data is in the IN_ROW_DATA allocation unit, subsequent insert or update actions on the column that would push the data off-row will fail. For more information about allocation units, see Table and Index Organization.
- You can include columns that contain row-overflow data as key or nonkey columns of a nonclustered index.
- The record-size limit for tables that use sparse columns is 8,018 bytes. When the converted data plus existing record data exceeds 8,018 bytes, MSSQLSERVER ERROR 576 is returned. When columns are converted between sparse and nonsparse types, the Database Engine keeps a copy of the current record data. This temporarily doubles the storage that is required for the record.
- To obtain information about tables or indexes that might contain row-overflow data, use the sys.dm_db_index_physical_stats dynamic management function.
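For example, a query along these lines (a sketch; 'dbo.Test' stands in for whatever table is being checked) reports which allocation unit types the table uses and how many pages each one holds:

-- Sketch: allocation unit types (IN_ROW_DATA, ROW_OVERFLOW_DATA, LOB_DATA)
-- and their page and record counts for one table in the current database.
SELECT ips.alloc_unit_type_desc, ips.page_count, ips.record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Test'), NULL, NULL, 'DETAILED') AS ips;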
Creating a table with n number of columns of datatype NVARCHAR:
CREATE PROC [dbo].[CreateMaxColTable_Nvarchar500]
(@TableName nvarchar(100), @NumofCols int)
AS
BEGIN
    DECLARE @i INT
    DECLARE @MAX INT
    DECLARE @SQL VARCHAR(MAX)
    DECLARE @j VARCHAR(10)
    DECLARE @len int

    SELECT @i = 1
    SELECT @MAX = @NumofCols
    SET @SQL = 'CREATE TABLE ' + @TableName + '('

    -- Append one NVARCHAR(500) column (X1, X2, ...) per iteration
    WHILE @i <= @MAX
    BEGIN
        SELECT @j = CAST(@i AS varchar)
        SELECT @SQL = @SQL + 'X' + @j + ' NVARCHAR(500) , '
        SET @i = @i + 1
    END

    -- Trim the trailing comma and close the column list
    SELECT @len = LEN(@SQL)
    SELECT @SQL = SUBSTRING(@SQL, 0, @len - 1)
    SELECT @SQL = @SQL + ' )'

    EXEC (@SQL)
END
For more information you can visit this link:
http://technet.microsoft.com/en-us/library/ms143432.aspx
But could you please tell us the scenario in which you need a table with so many columns? I think you should consider redesigning the database.
Answer by Kunal for Work around SQL Server maximum columns limit 1024 and 8kb record size
We had an application which captured 5,000 fields for a loan application. All fields are dependent on a single primary key, loanid. We could have split the table into multiple tables, but the fields are also dynamic; the admin also has a feature to create more fields, so everything is dynamic. The only good thing was a one-to-one relationship between loanid and the fields.
So, in the end, we went with an XML solution. The entire data is stored in an XML document: maximum flexibility, but it makes querying and reporting difficult.
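A rough sketch of that shape (illustrative only; the table, column, and field names below are made up, not the actual loan schema): all of the dynamic fields live in one XML column keyed by the loan id, and individual fields are pulled back out with the xml type's value() method, which is where the querying and reporting pain comes from.

-- Sketch: one XML column holds every dynamic field for a loan.
CREATE TABLE LoanFields
(
    LoanId int PRIMARY KEY,
    Fields xml NOT NULL
);

INSERT INTO LoanFields (LoanId, Fields)
VALUES (1, '<fields><BorrowerName>Jane Doe</BorrowerName><Amount>250000</Amount></fields>');

-- Extracting individual fields for queries or reports.
SELECT LoanId,
       Fields.value('(/fields/BorrowerName)[1]', 'nvarchar(200)') AS BorrowerName,
       Fields.value('(/fields/Amount)[1]', 'int') AS Amount
FROM LoanFields;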