Considerations of Data Partitioning on Spark during Data Loading on Clustered ColumnStore Index

Download
  • Version: 1.0
  • Date Published: 15/06/2024
  • File Name: Considerations of Data Partitioning on Spark during Data Loading on Clustered ColumnStore Index.pdf
  • File Size: 1.0 MB
  • Supported Operating Systems: Windows 11, Windows 10
  • Bulk load methods in SQL Server are serial by default: one BULK INSERT statement, for example, spawns only a single thread to insert data into a table. For concurrent loads, however, you can insert into the same table with multiple BULK INSERT statements, provided there are multiple files to read, as sketched below.
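
    A minimal sketch of this concurrent-load pattern, assuming a Python client with pyodbc; the connection string, target table (dbo.FactSales), and file paths are placeholders, and the WITH options are illustrative rather than prescriptive:

        # Run one BULK INSERT per source file on its own connection so the
        # loads proceed concurrently rather than one after another.
        from concurrent.futures import ThreadPoolExecutor

        import pyodbc

        # Placeholder connection string and file list (one statement per file).
        CONN_STR = (
            "DRIVER={ODBC Driver 18 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
        )
        FILES = [r"C:\data\part1.csv", r"C:\data\part2.csv", r"C:\data\part3.csv"]

        def bulk_insert(path: str) -> None:
            """Issue a single BULK INSERT for one file on a dedicated connection."""
            conn = pyodbc.connect(CONN_STR, autocommit=True)
            try:
                conn.cursor().execute(
                    f"BULK INSERT dbo.FactSales FROM '{path}' "
                    "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n');"
                )
            finally:
                conn.close()

        # One worker per file, so the BULK INSERT statements run at the same time.
        with ThreadPoolExecutor(max_workers=len(FILES)) as pool:
            list(pool.map(bulk_insert, FILES))

    Each call uses its own connection, so SQL Server sees several independent BULK INSERT statements against the same table instead of a single serial one, which is the concurrency pattern described above.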