diff --git a/markdown-pages/en/tidbcloud/master/TOC.md b/markdown-pages/en/tidbcloud/master/TOC.md index a6900fc..3e2bc4c 100644 --- a/markdown-pages/en/tidbcloud/master/TOC.md +++ b/markdown-pages/en/tidbcloud/master/TOC.md @@ -22,7 +22,7 @@ - [Transactions](/tidb-cloud/transaction-concepts.md) - [SQL](/tidb-cloud/sql-concepts.md) - [AI Features](/tidb-cloud/ai-feature-concepts.md) - - [Data Service](/tidb-cloud/data-service-concepts.md) ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) + - [Data Service](/tidb-cloud/data-service-concepts.md) ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [Scalability](/tidb-cloud/scalability-concepts.md) - High Availability - [High Availability in TiDB Cloud Serverless](/tidb-cloud/serverless-high-availability.md) @@ -76,7 +76,7 @@ - [mysql2](/develop/dev-guide-sample-application-ruby-mysql2.md) - [Rails](/develop/dev-guide-sample-application-ruby-rails.md) - [WordPress](/tidb-cloud/dev-guide-wordpress.md) - - Serverless Driver ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) + - Serverless Driver ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [TiDB Cloud Serverless Driver](/tidb-cloud/serverless-driver.md) - [Node.js Example](/tidb-cloud/serverless-driver-node-example.md) - [Prisma Example](/tidb-cloud/serverless-driver-prisma-example.md) @@ -144,7 +144,7 @@ - [Connection Overview](/tidb-cloud/connect-to-tidb-cluster-serverless.md) - [Connect via Public Endpoint](/tidb-cloud/connect-via-standard-connection-serverless.md) - [Connect via Private Endpoint](/tidb-cloud/set-up-private-endpoint-connections-serverless.md) - - Branch ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) + - Branch ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [Overview](/tidb-cloud/branch-overview.md) - [Manage Branches](/tidb-cloud/branch-manage.md) - [GitHub Integration](/tidb-cloud/branch-github-integration.md) @@ -180,12 +180,12 @@ - [Built-in Metrics](/tidb-cloud/built-in-monitoring.md) - [Built-in Alerting](/tidb-cloud/monitor-built-in-alerting.md) - [Cluster Events](/tidb-cloud/tidb-cloud-events.md) - - [Third-Party Metrics Integrations](/tidb-cloud/third-party-monitoring-integrations.md) ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) + - [Third-Party Metrics Integrations](/tidb-cloud/third-party-monitoring-integrations.md) ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - Tune Performance - [Overview](/tidb-cloud/tidb-cloud-tune-performance-overview.md) - Analyze Performance - [Use the Diagnosis Tab](/tidb-cloud/tune-performance.md) - - [Use Index Insight](/tidb-cloud/index-insight.md) ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) + - [Use Index Insight](/tidb-cloud/index-insight.md) ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [Use Statement Summary Tables](/statement-summary-tables.md) - SQL Tuning - [Overview](/tidb-cloud/tidb-cloud-sql-tuning-overview.md) @@ -253,12 +253,14 @@ - [Migrate from TiDB Self-Managed to TiDB Cloud](/tidb-cloud/migrate-from-op-tidb.md) - [Migrate from MySQL-Compatible Databases Using AWS DMS](/tidb-cloud/migrate-from-mysql-using-aws-dms.md) - [Migrate from Amazon RDS for Oracle Using AWS DMS](/tidb-cloud/migrate-from-oracle-using-aws-dms.md) - - Import Data into TiDB Cloud - - [Import Local 
Files](/tidb-cloud/tidb-cloud-import-local-files.md) - - [Import Sample Data (SQL File)](/tidb-cloud/import-sample-data.md) + - Import Data into TiDB Cloud Dedicated + - [Import Sample Data](/tidb-cloud/import-sample-data.md) - [Import CSV Files from Amazon S3 or GCS](/tidb-cloud/import-csv-files.md) - [Import Apache Parquet Files from Amazon S3 or GCS](/tidb-cloud/import-parquet-files.md) - [Import with MySQL CLI](/tidb-cloud/import-with-mysql-cli.md) + - Import Data into TiDB Cloud Serverless + - [Import Local Files](/tidb-cloud/tidb-cloud-import-local-files.md) + - [Import with MySQL CLI](/tidb-cloud/import-with-mysql-cli.md) - Reference - [Configure External Storage Access for TiDB Dedicated](/tidb-cloud/config-s3-and-gcs-access.md) - [Configure External Storage Access for TiDB Serverless](/tidb-cloud/serverless-external-storage.md) @@ -268,9 +270,9 @@ - [Precheck Errors, Migration Errors, and Alerts for Data Migration](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md) - [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md) - Explore Data - - [Chat2Query in SQL Editor](/tidb-cloud/explore-data-with-chat2query.md) ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) + - [Chat2Query in SQL Editor](/tidb-cloud/explore-data-with-chat2query.md) ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [SQL Proxy Account](/tidb-cloud/sql-proxy-account.md) -- Vector Search ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) +- Vector Search ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [Overview](/tidb-cloud/vector-search-overview.md) - Get Started - [Get Started with SQL](/tidb-cloud/vector-search-get-started-using-sql.md) @@ -293,7 +295,7 @@ - [Improve Performance](/tidb-cloud/vector-search-improve-performance.md) - [Limitations](/tidb-cloud/vector-search-limitations.md) - [Changelogs](/tidb-cloud/vector-search-changelogs.md) -- Data Service ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) +- Data Service ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [Overview](/tidb-cloud/data-service-overview.md) - [Get Started](/tidb-cloud/data-service-get-started.md) - Chat2Query API @@ -675,7 +677,7 @@ - [Metadata Lock](/metadata-lock.md) - [Use UUIDs](/best-practices/uuid.md) - [TiDB Accelerated Table Creation](/accelerated-table-creation.md) -- API Reference ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) +- API Reference ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [Overview](/tidb-cloud/api-overview.md) - v1beta1 - [Billing](https://docs.pingcap.com/tidbcloud/api/v1beta1/billing) @@ -683,7 +685,7 @@ - [IAM](https://docs.pingcap.com/tidbcloud/api/v1beta1/iam) - [MSP (Deprecated)](https://docs.pingcap.com/tidbcloud/api/v1beta1/msp) - [v1beta](https://docs.pingcap.com/tidbcloud/api/v1beta) -- CLI Reference ![BETA](https://download.pingcap.com/images/docs/tidb-cloud/blank_transparent_placeholder.png) +- CLI Reference ![BETA](/media/tidb-cloud/blank_transparent_placeholder.png) - [Overview](/tidb-cloud/cli-reference.md) - auth - [login](/tidb-cloud/ticloud-auth-login.md) diff --git a/markdown-pages/en/tidbcloud/master/tidb-cloud/import-csv-files.md b/markdown-pages/en/tidbcloud/master/tidb-cloud/import-csv-files.md new file mode 100644 index 0000000..87d17ea --- /dev/null +++ 
b/markdown-pages/en/tidbcloud/master/tidb-cloud/import-csv-files.md @@ -0,0 +1,217 @@ +--- +title: Import CSV Files from Amazon S3 or GCS into TiDB Cloud Dedicated +summary: Learn how to import CSV files from Amazon S3 or GCS into TiDB Cloud Dedicated. +aliases: ['/tidbcloud/migrate-from-amazon-s3-or-gcs','/tidbcloud/migrate-from-aurora-bulk-import'] +--- + +# Import CSV Files from Amazon S3 or GCS into TiDB Cloud Dedicated + +This document describes how to import CSV files from Amazon Simple Storage Service (Amazon S3) or Google Cloud Storage (GCS) into TiDB Cloud Dedicated. + +## Limitations + +- To ensure data consistency, TiDB Cloud allows you to import CSV files into empty tables only. To import data into an existing table that already contains data, you can use TiDB Cloud to import the data into a temporary empty table by following this document, and then use the `INSERT SELECT` statement to copy the data to the target existing table. + +- If a TiDB Cloud Dedicated cluster has a [changefeed](/tidb-cloud/changefeed-overview.md) or has [Point-in-time Restore](/tidb-cloud/backup-and-restore.md#turn-on-point-in-time-restore) enabled, you cannot import data to the cluster (the **Import Data** button will be disabled), because the current import data feature uses the [physical import mode](https://docs.pingcap.com/tidb/stable/tidb-lightning-physical-import-mode). In this mode, the imported data does not generate change logs, so the changefeed and Point-in-time Restore cannot detect the imported data. + +## Step 1. Prepare the CSV files + +1. If a CSV file is larger than 256 MB, consider splitting it into smaller files, each with a size around 256 MB. + + TiDB Cloud supports importing very large CSV files but performs best with multiple input files around 256 MB in size. This is because TiDB Cloud can process multiple files in parallel, which can greatly improve the import speed. + +2. Name the CSV files as follows: + + - If a CSV file contains all data of an entire table, name the file in the `${db_name}.${table_name}.csv` format, which maps to the `${db_name}.${table_name}` table when you import the data. + - If the data of one table is separated into multiple CSV files, append a numeric suffix to these CSV files. For example, `${db_name}.${table_name}.000001.csv` and `${db_name}.${table_name}.000002.csv`. The numeric suffixes can be inconsecutive but must be in ascending order. You also need to add extra zeros before the number to ensure all the suffixes are of the same length. + - TiDB Cloud supports importing compressed files in the following formats: `.gzip`, `.gz`, `.zstd`, `.zst`, and `.snappy`. If you want to import compressed CSV files, name the files in the `${db_name}.${table_name}.${suffix}.csv.${compress}` format, in which `${suffix}` is optional and can be any integer such as `000001`. For example, if you want to import the `trips.000001.csv.gz` file to the `bikeshare.trips` table, you need to rename the file as `bikeshare.trips.000001.csv.gz`. + + > **Note:** + > + > - You only need to compress the data files, not the database or table schema files. + > - To achieve better performance, it is recommended to limit the size of each compressed file to 100 MiB. + > - The Snappy compressed file must be in the [official Snappy format](https://github.com/google/snappy). Other variants of Snappy compression are not supported. 
+ > - For uncompressed files, if you cannot update the CSV filenames according to the preceding rules in some cases (for example, the CSV file links are also used by your other programs), you can keep the filenames unchanged and use the **Mapping Settings** in [Step 4](#step-4-import-csv-files-to-tidb-cloud) to import your source data to a single target table. + +## Step 2. Create the target table schemas + +Because CSV files do not contain schema information, before importing data from CSV files into TiDB Cloud, you need to create the table schemas using either of the following methods: + +- Method 1: In TiDB Cloud, create the target databases and tables for your source data. + +- Method 2: In the Amazon S3 or GCS directory where the CSV files are located, create the target table schema files for your source data as follows: + + 1. Create database schema files for your source data. + + If your CSV files follow the naming rules in [Step 1](#step-1-prepare-the-csv-files), the database schema files are optional for the data import. Otherwise, the database schema files are mandatory. + + Each database schema file must be in the `${db_name}-schema-create.sql` format and contain a `CREATE DATABASE` DDL statement. With this file, TiDB Cloud will create the `${db_name}` database to store your data when you import the data. + + For example, if you create a `mydb-schema-create.sql` file that contains the following statement, TiDB Cloud will create the `mydb` database when you import the data. + + ```sql + CREATE DATABASE mydb; + ``` + + 2. Create table schema files for your source data. + + If you do not include the table schema files in the Amazon S3 or GCS directory where the CSV files are located, TiDB Cloud will not create the corresponding tables for you when you import the data. + + Each table schema file must be in the `${db_name}.${table_name}-schema.sql` format and contain a `CREATE TABLE` DDL statement. With this file, TiDB Cloud will create the `${table_name}` table in the `${db_name}` database when you import the data. + + For example, if you create a `mydb.mytable-schema.sql` file that contains the following statement, TiDB Cloud will create the `mytable` table in the `mydb` database when you import the data. + + ```sql + CREATE TABLE mytable ( + ID INT, + REGION VARCHAR(20), + COUNT INT ); + ``` + + > **Note:** + > + > Each `${db_name}.${table_name}-schema.sql` file should only contain a single DDL statement. If the file contains multiple DDL statements, only the first one takes effect. + +## Step 3. Configure cross-account access + +To allow TiDB Cloud to access the CSV files in the Amazon S3 or GCS bucket, do one of the following: + +- If your CSV files are located in Amazon S3, [configure Amazon S3 access](/tidb-cloud/config-s3-and-gcs-access.md#configure-amazon-s3-access). + + You can use either an AWS access key or a Role ARN to access your bucket. Once finished, make a note of the access key (including the access key ID and secret access key) or the Role ARN value as you will need it in [Step 4](#step-4-import-csv-files-to-tidb-cloud). + +- If your CSV files are located in GCS, [configure GCS access](/tidb-cloud/config-s3-and-gcs-access.md#configure-gcs-access). + +## Step 4. Import CSV files to TiDB Cloud + +To import the CSV files to TiDB Cloud, take the following steps: + +
+ +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Select **Import data from S3**. + + If this is your first time importing data into this cluster, select **Import From Amazon S3**. + +3. On the **Import Data from Amazon S3** page, provide the following information for the source CSV files: + + - **Import File Count**: select **One file** or **Multiple files** as needed. + - **Included Schema Files**: this field is only visible when importing multiple files. If the source folder contains the target table schemas, select **Yes**. Otherwise, select **No**. + - **Data Format**: select **CSV**. + - **File URI** or **Folder URI**: + - When importing one file, enter the source file URI and name in the following format `s3://[bucket_name]/[data_source_folder]/[file_name].csv`. For example, `s3://sampledata/ingest/TableName.01.csv`. + - When importing multiple files, enter the source file URI and name in the following format `s3://[bucket_name]/[data_source_folder]/`. For example, `s3://sampledata/ingest/`. + - **Bucket Access**: you can use either an AWS Role ARN or an AWS access key to access your bucket. For more information, see [Configure Amazon S3 access](/tidb-cloud/config-s3-and-gcs-access.md#configure-amazon-s3-access). + - **AWS Role ARN**: enter the AWS Role ARN value. + - **AWS Access Key**: enter the AWS access key ID and AWS secret access key. + +4. Click **Connect**. + +5. In the **Destination** section, select the target database and table. + + When importing multiple files, you can use **Advanced Settings** > **Mapping Settings** to define a custom mapping rule for each target table and its corresponding CSV file. After that, the data source files will be re-scanned using the provided custom mapping rule. + + When you enter the source file URI and name in **Source File URIs and Names**, make sure it is in the following format `s3://[bucket_name]/[data_source_folder]/[file_name].csv`. For example, `s3://sampledata/ingest/TableName.01.csv`. + + You can also use wildcards to match the source files. For example: + + - `s3://[bucket_name]/[data_source_folder]/my-data?.csv`: all CSV files starting with `my-data` followed by one character (such as `my-data1.csv` and `my-data2.csv`) in that folder will be imported into the same target table. + + - `s3://[bucket_name]/[data_source_folder]/my-data*.csv`: all CSV files in the folder starting with `my-data` will be imported into the same target table. + + Note that only `?` and `*` are supported. + + > **Note:** + > + > The URI must contain the data source folder. + +6. Click **Start Import**. + +7. When the import progress shows **Completed**, check the imported tables. + +
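+    For example, once the import completes, you can spot-check the result from any SQL client connected to the cluster. The statements below are only an illustration: `bikeshare` and `trips` stand in for whatever database and table your CSV files map to:
+
+    ```sql
+    SHOW TABLES FROM bikeshare;
+    SELECT COUNT(*) FROM bikeshare.trips;
+    SELECT * FROM bikeshare.trips LIMIT 10;
+    ```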
+ +
+ +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Click **Import Data** in the upper-right corner. + + If this is your first time importing data into this cluster, select **Import From GCS**. + +3. On the **Import Data from GCS** page, provide the following information for the source CSV files: + + - **Import File Count**: select **One file** or **Multiple files** as needed. + - **Included Schema Files**: this field is only visible when importing multiple files. If the source folder contains the target table schemas, select **Yes**. Otherwise, select **No**. + - **Data Format**: select **CSV**. + - **File URI** or **Folder URI**: + - When importing one file, enter the source file URI and name in the following format `gs://[bucket_name]/[data_source_folder]/[file_name].csv`. For example, `gs://sampledata/ingest/TableName.01.csv`. + - When importing multiple files, enter the source file URI and name in the following format `gs://[bucket_name]/[data_source_folder]/`. For example, `gs://sampledata/ingest/`. + - **Bucket Access**: you can use a GCS IAM Role to access your bucket. For more information, see [Configure GCS access](/tidb-cloud/config-s3-and-gcs-access.md#configure-gcs-access). + +4. Click **Connect**. + +5. In the **Destination** section, select the target database and table. + + When importing multiple files, you can use **Advanced Settings** > **Mapping Settings** to define a custom mapping rule for each target table and its corresponding CSV file. After that, the data source files will be re-scanned using the provided custom mapping rule. + + When you enter the source file URI and name in **Source File URIs and Names**, make sure it is in the following format `gs://[bucket_name]/[data_source_folder]/[file_name].csv`. For example, `gs://sampledata/ingest/TableName.01.csv`. + + You can also use wildcards to match the source files. For example: + + - `gs://[bucket_name]/[data_source_folder]/my-data?.csv`: all CSV files starting with `my-data` followed by one character (such as `my-data1.csv` and `my-data2.csv`) in that folder will be imported into the same target table. + + - `gs://[bucket_name]/[data_source_folder]/my-data*.csv`: all CSV files in the folder starting with `my-data` will be imported into the same target table. + + Note that only `?` and `*` are supported. + + > **Note:** + > + > The URI must contain the data source folder. + +6. Click **Start Import**. + +7. When the import progress shows **Completed**, check the imported tables. + +
+ +
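+As noted in [Limitations](#limitations), TiDB Cloud imports CSV files into empty tables only. If your real destination table already contains data, one way to proceed is to import into a temporary empty table first and then copy the rows over with `INSERT INTO ... SELECT`. The statements below are only a sketch: `mydb.mytable_staging` and `mydb.mytable` are hypothetical names for the temporary table and the existing target table, and the two tables are assumed to have the same column layout:
+
+```sql
+-- Copy the imported rows into the existing target table.
+INSERT INTO mydb.mytable
+SELECT * FROM mydb.mytable_staging;
+
+-- Optionally remove the temporary table afterwards.
+DROP TABLE mydb.mytable_staging;
+```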
+ +When you run an import task, if any unsupported or invalid conversions are detected, TiDB Cloud terminates the import job automatically and reports an importing error. + +If you get an importing error, do the following: + +1. Drop the partially imported table. +2. Check the table schema file. If there are any errors, correct the table schema file. +3. Check the data types in the CSV files. +4. Try the import task again. + +## Troubleshooting + +### Resolve warnings during data import + +After clicking **Start Import**, if you see a warning message such as `can't find the corresponding source files`, resolve this by providing the correct source file, renaming the existing one according to [Naming Conventions for Data Import](/tidb-cloud/naming-conventions-for-data-import.md), or using **Advanced Settings** to make changes. + +After resolving these issues, you need to import the data again. + +### Zero rows in the imported tables + +After the import progress shows **Completed**, check the imported tables. If the number of rows is zero, it means no data files matched the Bucket URI that you entered. In this case, resolve this issue by providing the correct source file, renaming the existing one according to [Naming Conventions for Data Import](/tidb-cloud/naming-conventions-for-data-import.md), or using **Advanced Settings** to make changes. After that, import those tables again. diff --git a/markdown-pages/en/tidbcloud/master/tidb-cloud/import-parquet-files.md b/markdown-pages/en/tidbcloud/master/tidb-cloud/import-parquet-files.md new file mode 100644 index 0000000..1253945 --- /dev/null +++ b/markdown-pages/en/tidbcloud/master/tidb-cloud/import-parquet-files.md @@ -0,0 +1,246 @@ +--- +title: Import Apache Parquet Files from Amazon S3 or GCS into TiDB Cloud Dedicated +summary: Learn how to import Apache Parquet files from Amazon S3 or GCS into TiDB Cloud Dedicated. +--- + +# Import Apache Parquet Files from Amazon S3 or GCS into TiDB Cloud Dedicated + +You can import both uncompressed and Snappy compressed [Apache Parquet](https://parquet.apache.org/) format data files to TiDB Cloud Dedicated. This document describes how to import Parquet files from Amazon Simple Storage Service (Amazon S3) or Google Cloud Storage (GCS) into TiDB Cloud Dedicated. + +> **Note:** +> +> - TiDB Cloud only supports importing Parquet files into empty tables. To import data into an existing table that already contains data, you can use TiDB Cloud to import the data into a temporary empty table by following this document, and then use the `INSERT SELECT` statement to copy the data to the target existing table. +> - If there is a changefeed in a TiDB Cloud Dedicated cluster, you cannot import data to the cluster (the **Import Data** button will be disabled), because the current import data feature uses the [physical import mode](https://docs.pingcap.com/tidb/stable/tidb-lightning-physical-import-mode). In this mode, the imported data does not generate change logs, so the changefeed cannot detect the imported data. +> - The Snappy compressed file must be in the [official Snappy format](https://github.com/google/snappy). Other variants of Snappy compression are not supported. + +## Step 1. Prepare the Parquet files + +> **Note:** +> +> Currently, TiDB Cloud does not support importing Parquet files that contain any of the following data types. 
If Parquet files to be imported contain such data types, you need to first regenerate the Parquet files using the [supported data types](#supported-data-types) (for example, `STRING`). Alternatively, you could use a service such as AWS Glue to transform data types easily. + > + > - `LIST` + > - `NEST STRUCT` + > - `BOOL` + > - `ARRAY` + > - `MAP` + +1. If a Parquet file is larger than 256 MB, consider splitting it into smaller files, each with a size around 256 MB. + + TiDB Cloud supports importing very large Parquet files but performs best with multiple input files around 256 MB in size. This is because TiDB Cloud can process multiple files in parallel, which can greatly improve the import speed. + +2. Name the Parquet files as follows: + + - If a Parquet file contains all data of an entire table, name the file in the `${db_name}.${table_name}.parquet` format, which maps to the `${db_name}.${table_name}` table when you import the data. + - If the data of one table is separated into multiple Parquet files, append a numeric suffix to these Parquet files. For example, `${db_name}.${table_name}.000001.parquet` and `${db_name}.${table_name}.000002.parquet`. The numeric suffixes can be inconsecutive but must be in ascending order. You also need to add extra zeros before the number to ensure all the suffixes are of the same length. + + > **Note:** + > + > If you cannot update the Parquet filenames according to the preceding rules in some cases (for example, the Parquet file links are also used by your other programs), you can keep the filenames unchanged and use the **Mapping Settings** in [Step 4](#step-4-import-parquet-files-to-tidb-cloud) to import your source data to a single target table. + +## Step 2. Create the target table schemas + +Because Parquet files do not contain schema information, before importing data from Parquet files into TiDB Cloud, you need to create the table schemas using either of the following methods: + +- Method 1: In TiDB Cloud, create the target databases and tables for your source data. + +- Method 2: In the Amazon S3 or GCS directory where the Parquet files are located, create the target table schema files for your source data as follows: + + 1. Create database schema files for your source data. + + If your Parquet files follow the naming rules in [Step 1](#step-1-prepare-the-parquet-files), the database schema files are optional for the data import. Otherwise, the database schema files are mandatory. + + Each database schema file must be in the `${db_name}-schema-create.sql` format and contain a `CREATE DATABASE` DDL statement. With this file, TiDB Cloud will create the `${db_name}` database to store your data when you import the data. + + For example, if you create a `mydb-schema-create.sql` file that contains the following statement, TiDB Cloud will create the `mydb` database when you import the data. + + ```sql + CREATE DATABASE mydb; + ``` + + 2. Create table schema files for your source data. + + If you do not include the table schema files in the Amazon S3 or GCS directory where the Parquet files are located, TiDB Cloud will not create the corresponding tables for you when you import the data. + + Each table schema file must be in the `${db_name}.${table_name}-schema.sql` format and contain a `CREATE TABLE` DDL statement. With this file, TiDB Cloud will create the `${table_name}` table in the `${db_name}` database when you import the data. 
+ + For example, if you create a `mydb.mytable-schema.sql` file that contains the following statement, TiDB Cloud will create the `mytable` table in the `mydb` database when you import the data. + + ```sql + CREATE TABLE mytable ( + ID INT, + REGION VARCHAR(20), + COUNT INT ); + ``` + + > **Note:** + > + > Each `${db_name}.${table_name}-schema.sql` file should only contain a single DDL statement. If the file contains multiple DDL statements, only the first one takes effect. + +## Step 3. Configure cross-account access + +To allow TiDB Cloud to access the Parquet files in the Amazon S3 or GCS bucket, do one of the following: + +- If your Parquet files are located in Amazon S3, [configure Amazon S3 access](/tidb-cloud/config-s3-and-gcs-access.md#configure-amazon-s3-access). + + You can use either an AWS access key or a Role ARN to access your bucket. Once finished, make a note of the access key (including the access key ID and secret access key) or the Role ARN value as you will need it in [Step 4](#step-4-import-parquet-files-to-tidb-cloud). + +- If your Parquet files are located in GCS, [configure GCS access](/tidb-cloud/config-s3-and-gcs-access.md#configure-gcs-access). + +## Step 4. Import Parquet files to TiDB Cloud + +To import the Parquet files to TiDB Cloud, take the following steps: + + +
+ +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Select **Import data from S3**. + + If this is your first time importing data into this cluster, select **Import From Amazon S3**. + +3. On the **Import Data from Amazon S3** page, provide the following information for the source Parquet files: + + - **Import File Count**: select **One file** or **Multiple files** as needed. + - **Included Schema Files**: this field is only visible when importing multiple files. If the source folder contains the target table schemas, select **Yes**. Otherwise, select **No**. + - **Data Format**: select **Parquet**. + - **File URI** or **Folder URI**: + - When importing one file, enter the source file URI and name in the following format `s3://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `s3://sampledata/ingest/TableName.01.parquet`. + - When importing multiple files, enter the source file URI and name in the following format `s3://[bucket_name]/[data_source_folder]/`. For example, `s3://sampledata/ingest/`. + - **Bucket Access**: you can use either an AWS Role ARN or an AWS access key to access your bucket. For more information, see [Configure Amazon S3 access](/tidb-cloud/config-s3-and-gcs-access.md#configure-amazon-s3-access). + - **AWS Role ARN**: enter the AWS Role ARN value. + - **AWS Access Key**: enter the AWS access key ID and AWS secret access key. + +4. Click **Connect**. + +5. In the **Destination** section, select the target database and table. + + When importing multiple files, you can use **Advanced Settings** > **Mapping Settings** to define a custom mapping rule for each target table and its corresponding Parquet file. After that, the data source files will be re-scanned using the provided custom mapping rule. + + When you enter the source file URI and name in **Source File URIs and Names**, make sure it is in the following format `s3://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `s3://sampledata/ingest/TableName.01.parquet`. + + You can also use wildcards to match the source files. For example: + + - `s3://[bucket_name]/[data_source_folder]/my-data?.parquet`: all Parquet files starting with `my-data` followed by one character (such as `my-data1.parquet` and `my-data2.parquet`) in that folder will be imported into the same target table. + + - `s3://[bucket_name]/[data_source_folder]/my-data*.parquet`: all Parquet files in the folder starting with `my-data` will be imported into the same target table. + + Note that only `?` and `*` are supported. + + > **Note:** + > + > The URI must contain the data source folder. + +6. Click **Start Import**. + +7. When the import progress shows **Completed**, check the imported tables. + +
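+    For example, once the import completes, you can confirm the resulting schema and spot-check a few rows from a SQL client. `mydb` and `mytable` below are just placeholders for your own target database and table:
+
+    ```sql
+    SHOW CREATE TABLE mydb.mytable;
+    SELECT * FROM mydb.mytable LIMIT 10;
+    ```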
+ +
+ +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Click **Import Data** in the upper-right corner. + + If this is your first time importing data into this cluster, select **Import From GCS**. + +3. On the **Import Data from GCS** page, provide the following information for the source Parquet files: + + - **Import File Count**: select **One file** or **Multiple files** as needed. + - **Included Schema Files**: this field is only visible when importing multiple files. If the source folder contains the target table schemas, select **Yes**. Otherwise, select **No**. + - **Data Format**: select **Parquet**. + - **File URI** or **Folder URI**: + - When importing one file, enter the source file URI and name in the following format `gs://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `gs://sampledata/ingest/TableName.01.parquet`. + - When importing multiple files, enter the source file URI and name in the following format `gs://[bucket_name]/[data_source_folder]/`. For example, `gs://sampledata/ingest/`. + - **Bucket Access**: you can use a GCS IAM Role to access your bucket. For more information, see [Configure GCS access](/tidb-cloud/config-s3-and-gcs-access.md#configure-gcs-access). + +4. Click **Connect**. + +5. In the **Destination** section, select the target database and table. + + When importing multiple files, you can use **Advanced Settings** > **Mapping Settings** to define a custom mapping rule for each target table and its corresponding Parquet file. After that, the data source files will be re-scanned using the provided custom mapping rule. + + When you enter the source file URI and name in **Source File URIs and Names**, make sure it is in the following format `gs://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `gs://sampledata/ingest/TableName.01.parquet`. + + You can also use wildcards to match the source files. For example: + + - `gs://[bucket_name]/[data_source_folder]/my-data?.parquet`: all Parquet files starting with `my-data` followed by one character (such as `my-data1.parquet` and `my-data2.parquet`) in that folder will be imported into the same target table. + + - `gs://[bucket_name]/[data_source_folder]/my-data*.parquet`: all Parquet files in the folder starting with `my-data` will be imported into the same target table. + + Note that only `?` and `*` are supported. + + > **Note:** + > + > The URI must contain the data source folder. + +6. Click **Start Import**. + +7. When the import progress shows **Completed**, check the imported tables. + +
+ +
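+If you let TiDB Cloud create the target tables from schema files (see [Step 2](#step-2-create-the-target-table-schemas)), it helps to pick column types that line up with the mappings in [Supported data types](#supported-data-types). The sketch below is only an illustration with hypothetical names: it assumes a Parquet file whose columns are `INT64 (TIMESTAMP_MICROS)`, `BYTE_ARRAY (STRING)`, and `FIXED_LEN_BYTE_ARRAY(9) (DECIMAL(20,0))`, and pairs it with a `mydb.events-schema.sql` file such as:
+
+```sql
+CREATE TABLE events (
+    created_at DATETIME,        -- receives INT64 (TIMESTAMP_MICROS) values
+    payload VARCHAR(255),       -- receives BYTE_ARRAY (STRING) values
+    account_id BIGINT UNSIGNED  -- receives FIXED_LEN_BYTE_ARRAY(9) DECIMAL(20,0) values
+);
+```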
+ +When you run an import task, if any unsupported or invalid conversions are detected, TiDB Cloud terminates the import job automatically and reports an importing error. + +If you get an importing error, do the following: + +1. Drop the partially imported table. +2. Check the table schema file. If there are any errors, correct the table schema file. +3. Check the data types in the Parquet files. + + If the Parquet files contain any unsupported data types (for example, `NEST STRUCT`, `ARRAY`, or `MAP`), you need to regenerate the Parquet files using [supported data types](#supported-data-types) (for example, `STRING`). + +4. Try the import task again. + +## Supported data types + +The following table lists the supported Parquet data types that can be imported to TiDB Cloud. + +| Parquet Primitive Type | Parquet Logical Type | Types in TiDB or MySQL | +|---|---|---| +| DOUBLE | DOUBLE | DOUBLE<br/>FLOAT | +| FIXED_LEN_BYTE_ARRAY(9) | DECIMAL(20,0) | BIGINT UNSIGNED | +| FIXED_LEN_BYTE_ARRAY(N) | DECIMAL(p,s) | DECIMAL<br/>NUMERIC | +| INT32 | DECIMAL(p,s) | DECIMAL<br/>NUMERIC | +| INT32 | N/A | INT<br/>MEDIUMINT<br/>YEAR | +| INT64 | DECIMAL(p,s) | DECIMAL<br/>NUMERIC | +| INT64 | N/A | BIGINT<br/>INT UNSIGNED<br/>MEDIUMINT UNSIGNED | +| INT64 | TIMESTAMP_MICROS | DATETIME<br/>TIMESTAMP | +| BYTE_ARRAY | N/A | BINARY<br/>BIT<br/>BLOB<br/>CHAR<br/>LINESTRING<br/>LONGBLOB<br/>MEDIUMBLOB<br/>MULTILINESTRING<br/>TINYBLOB<br/>VARBINARY | +| BYTE_ARRAY | STRING | ENUM<br/>DATE<br/>DECIMAL<br/>GEOMETRY<br/>GEOMETRYCOLLECTION<br/>JSON<br/>LONGTEXT<br/>MEDIUMTEXT<br/>MULTIPOINT<br/>MULTIPOLYGON<br/>NUMERIC<br/>POINT<br/>POLYGON<br/>SET<br/>TEXT<br/>TIME<br/>TINYTEXT<br/>VARCHAR | +| SMALLINT | N/A | INT32 | +| SMALLINT UNSIGNED | N/A | INT32 | +| TINYINT | N/A | INT32 | +| TINYINT UNSIGNED | N/A | INT32 | + +## Troubleshooting + +### Resolve warnings during data import + +After clicking **Start Import**, if you see a warning message such as `can't find the corresponding source files`, resolve this by providing the correct source file, renaming the existing one according to [Naming Conventions for Data Import](/tidb-cloud/naming-conventions-for-data-import.md), or using **Advanced Settings** to make changes. + +After resolving these issues, you need to import the data again. + +### Zero rows in the imported tables + +After the import progress shows **Completed**, check the imported tables. If the number of rows is zero, it means no data files matched the Bucket URI that you entered. In this case, resolve this issue by providing the correct source file, renaming the existing one according to [Naming Conventions for Data Import](/tidb-cloud/naming-conventions-for-data-import.md), or using **Advanced Settings** to make changes. After that, import those tables again. diff --git a/markdown-pages/en/tidbcloud/master/tidb-cloud/import-sample-data.md b/markdown-pages/en/tidbcloud/master/tidb-cloud/import-sample-data.md new file mode 100644 index 0000000..10a4305 --- /dev/null +++ b/markdown-pages/en/tidbcloud/master/tidb-cloud/import-sample-data.md @@ -0,0 +1,127 @@ +--- +title: Import Sample Data +summary: Learn how to import sample data into TiDB Cloud via UI. +--- + +# Import Sample Data + +This document describes how to import the sample data into TiDB Cloud via the UI. The sample data used is the system data from Capital Bikeshare, released under the Capital Bikeshare Data License Agreement. Before importing the sample data, you need to have one TiDB cluster. + +
+ +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Select **Import data from S3**. + + If this is your first time importing data into this cluster, select **Import From Amazon S3**. + +3. On the **Import Data from Amazon S3** page, configure the following source data information: + + - **Import File Count**: for the sample data, select **Multiple files**. + - **Included Schema Files**: for the sample data, select **Yes**. + - **Data Format**: select **SQL**. + - **Folder URI** or **File URI**: enter the sample data URI `s3://tidbcloud-sample-data/data-ingestion/`. + - **Bucket Access**: for the sample data, you can only use a Role ARN to access its bucket. For your own data, you can use either an AWS access key or a Role ARN to access your bucket. + - **AWS Role ARN**: enter `arn:aws:iam::801626783489:role/import-sample-access`. + - **AWS Access Key**: skip this option for the sample data. + +4. Click **Connect** > **Start Import**. + +
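+After the import finishes, the sample data is available in a `bikeshare` database that contains a `trips` table, which the example queries later in this document assume. A quick way to confirm that rows arrived, from any SQL client connected to the cluster:
+
+```sql
+SELECT COUNT(*) FROM bikeshare.trips;
+```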
+
+ +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Click **Import Data** in the upper-right corner. + + If this is your first time importing data into this cluster, select **Import From GCS**. + +3. On the **Import Data from GCS** page, configure the following source data information: + + - **Import File Count**: for the sample data, select **Multiple files**. + - **Included Schema Files**: for the sample data, select **Yes**. + - **Data Format**: select **SQL**. + - **Folder URI** or **File URI**: enter the sample data URI `gs://tidbcloud-samples-us-west1/`. + - **Bucket Access**: you can use a GCS IAM Role to access your bucket. For more information, see [Configure GCS access](/tidb-cloud/config-s3-and-gcs-access.md#configure-gcs-access). + + If the region of the bucket is different from your cluster, confirm that cross-region data transfer meets your compliance requirements. + +4. Click **Connect** > **Start Import**. +
+
+ +When the data import progress shows **Completed**, you have successfully imported the sample data and the database schema to your database in TiDB Cloud. + +After connecting to the cluster, you can run some queries in your terminal to check the result, for example: + +1. Get the trip records starting at "12th & U St NW": + + ```sql + use bikeshare; + ``` + + ```sql + select * from `trips` where start_station_name='12th & U St NW' limit 10; + ``` + + ```sql + +-----------------+---------------+---------------------+---------------------+--------------------+------------------+-------------------------------------------+----------------+-----------+------------+-----------+------------+---------------+ + | ride_id | rideable_type | started_at | ended_at | start_station_name | start_station_id | end_station_name | end_station_id | start_lat | start_lng | end_lat | end_lng | member_casual | + +-----------------+---------------+---------------------+---------------------+--------------------+------------------+-------------------------------------------+----------------+-----------+------------+-----------+------------+---------------+ + | E291FF5018 | classic_bike | 2021-01-02 11:12:38 | 2021-01-02 11:23:47 | 12th & U St NW | 31268 | 7th & F St NW / National Portrait Gallery | 31232 | 38.916786 | -77.02814 | 38.89728 | -77.022194 | member | + | E76F3605D0 | docked_bike | 2020-09-13 00:44:11 | 2020-09-13 00:59:38 | 12th & U St NW | 31268 | 17th St & Massachusetts Ave NW | 31267 | 38.916786 | -77.02814 | 38.908142 | -77.03836 | casual | + | FFF0B75414 | docked_bike | 2020-09-28 16:47:53 | 2020-09-28 16:57:30 | 12th & U St NW | 31268 | 17th St & Massachusetts Ave NW | 31267 | 38.916786 | -77.02814 | 38.908142 | -77.03836 | casual | + | C3F2C16949 | docked_bike | 2020-09-13 00:42:03 | 2020-09-13 00:59:43 | 12th & U St NW | 31268 | 17th St & Massachusetts Ave NW | 31267 | 38.916786 | -77.02814 | 38.908142 | -77.03836 | casual | + | 1C7EC91629 | docked_bike | 2020-09-28 16:47:49 | 2020-09-28 16:57:26 | 12th & U St NW | 31268 | 17th St & Massachusetts Ave NW | 31267 | 38.916786 | -77.02814 | 38.908142 | -77.03836 | member | + | A3A38BCACA | classic_bike | 2021-01-14 09:52:53 | 2021-01-14 10:00:51 | 12th & U St NW | 31268 | 10th & E St NW | 31256 | 38.916786 | -77.02814 | 38.895912 | -77.02606 | member | + | EC4943257E | electric_bike | 2021-01-28 10:06:52 | 2021-01-28 10:16:28 | 12th & U St NW | 31268 | 10th & E St NW | 31256 | 38.916843 | -77.028206 | 38.89607 | -77.02608 | member | + | D4070FBFA7 | classic_bike | 2021-01-12 09:50:51 | 2021-01-12 09:59:41 | 12th & U St NW | 31268 | 10th & E St NW | 31256 | 38.916786 | -77.02814 | 38.895912 | -77.02606 | member | + | 6EABEF3CAB | classic_bike | 2021-01-09 15:00:43 | 2021-01-09 15:18:30 | 12th & U St NW | 31268 | 1st & M St NE | 31603 | 38.916786 | -77.02814 | 38.905697 | -77.005486 | member | + | 2F5CC89018 | electric_bike | 2021-01-02 01:47:07 | 2021-01-02 01:58:29 | 12th & U St NW | 31268 | 3rd & H St NE | 31616 | 38.916836 | -77.02815 | 38.90074 | -77.00219 | member | + +-----------------+---------------+---------------------+---------------------+--------------------+------------------+-------------------------------------------+----------------+-----------+------------+-----------+------------+---------------+ + ``` + +2. 
Get the trip records with electric bikes: + + ```sql + use bikeshare; + ``` + + ```sql + select * from `trips` where rideable_type="electric_bike" limit 10; + ``` + + ```sql + +------------------+---------------+---------------------+---------------------+----------------------------------------+------------------+-------------------------------------------------------+----------------+-----------+------------+-----------+------------+---------------+ + | ride_id | rideable_type | started_at | ended_at | start_station_name | start_station_id | end_station_name | end_station_id | start_lat | start_lng | end_lat | end_lng | member_casual | + +------------------+---------------+---------------------+---------------------+----------------------------------------+------------------+-------------------------------------------------------+----------------+-----------+------------+-----------+------------+---------------+ + | AF15B12839DA4367 | electric_bike | 2021-01-23 14:50:46 | 2021-01-23 14:59:55 | Columbus Circle / Union Station | 31623 | 15th & East Capitol St NE | 31630 | 38.8974 | -77.00481 | 38.890 | 76.98354 | member | + | 7173E217338C4752 | electric_bike | 2021-01-15 08:28:38 | 2021-01-15 08:33:49 | 37th & O St NW / Georgetown University | 31236 | 34th St & Wisconsin Ave NW | 31226 | 38.907825 | -77.071655 | 38.916 | -77.0683 | member | + | E665505ED621D1AB | electric_bike | 2021-01-05 13:25:47 | 2021-01-05 13:35:58 | N Lynn St & Fairfax Dr | 31917 | 34th St & Wisconsin Ave NW | 31226 | 38.89359 | -77.07089 | 38.916 | 77.06829 | member | + | 646AFE266A6375AF | electric_bike | 2021-01-16 00:08:10 | 2021-01-16 00:35:58 | 7th St & Massachusetts Ave NE | 31647 | 34th St & Wisconsin Ave NW | 31226 | 38.892235 | -76.996025 | 38.91 | 7.068245 | member | + | 40CDDA0378E45736 | electric_bike | 2021-01-03 11:14:50 | 2021-01-03 11:26:04 | N Lynn St & Fairfax Dr | 31917 | 34th St & Wisconsin Ave NW | 31226 | 38.893734 | -77.07096 | 38.916 | 7.068275 | member | + | E0A7DDB0CE680C01 | electric_bike | 2021-01-05 18:18:17 | 2021-01-05 19:04:11 | Maine Ave & 7th St SW | 31609 | Smithsonian-National Mall / Jefferson Dr & 12th St SW | 31248 | 38.878727 | -77.02304 | 38.8 | 7.028755 | casual | + | 71BDF35029AF0039 | electric_bike | 2021-01-07 10:23:57 | 2021-01-07 10:59:43 | 10th & K St NW | 31263 | East West Hwy & Blair Mill Rd | 32019 | 38.90279 | -77.02633 | 38.990 | 77.02937 | member | + | D5EACDF488260A61 | electric_bike | 2021-01-13 20:57:23 | 2021-01-13 21:04:19 | 8th & H St NE | 31661 | 15th & East Capitol St NE | 31630 | 38.89985 | -76.994835 | 38.88 | 76.98345 | member | + | 211D449363FB7EE3 | electric_bike | 2021-01-15 17:22:02 | 2021-01-15 17:35:49 | 7th & K St NW | 31653 | 15th & East Capitol St NE | 31630 | 38.90216 | -77.0211 | 38.88 | 76.98357 | casual | + | CE667578A7291701 | electric_bike | 2021-01-15 16:55:12 | 2021-01-15 17:38:26 | East West Hwy & 16th St | 32056 | East West Hwy & Blair Mill Rd | 32019 | 38.995674 | -77.03868 | 38.990 | 77.02953 | casual | + +------------------+---------------+---------------------+---------------------+----------------------------------------+------------------+-------------------------------------------------------+----------------+-----------+------------+-----------+------------+---------------+ + ``` diff --git a/markdown-pages/en/tidbcloud/master/tidb-cloud/tidb-cloud-import-local-files.md b/markdown-pages/en/tidbcloud/master/tidb-cloud/tidb-cloud-import-local-files.md new file mode 100644 index 0000000..7b751b6 --- /dev/null +++ 
b/markdown-pages/en/tidbcloud/master/tidb-cloud/tidb-cloud-import-local-files.md @@ -0,0 +1,140 @@ +--- +title: Import Local Files to TiDB Cloud Serverless +summary: Learn how to import local files to TiDB Cloud Serverless. +--- + +# Import Local Files to TiDB Cloud Serverless + +You can import local files to TiDB Cloud Serverless directly. It only takes a few clicks to complete the task configuration, and then your local CSV data will be quickly imported to your TiDB cluster. Using this method, you do not need to provide cloud storage or credentials. The whole importing process is quick and smooth. + +Currently, this method supports importing one CSV file for one task into either an existing empty table or a new table. + +## Limitations + +- Currently, TiDB Cloud only supports importing a local file in CSV format within 250 MiB for one task. +- Importing local files is supported only for TiDB Cloud Serverless clusters, not for TiDB Cloud Dedicated clusters. +- You cannot run more than one import task at the same time. + +## Import local files + +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. On the **Import** page, you can directly drag and drop your local file to the upload area, or click **Upload a local file** to select and upload the target local file. Note that you can upload only one CSV file of less than 250 MiB for one task. If your local file is larger than 250 MiB, see [How to import a local file larger than 250 MiB?](#how-to-import-a-local-file-larger-than-250-mib). + +3. In the **Destination** section, select the target database and the target table, or enter a name directly to create a new database or a new table. The name must only contain characters in Unicode BMP (Basic Multilingual Plane), excluding the null character `\u0000` and whitespace characters, and can be up to 64 characters in length. Click **Define Table**, and the **Table Definition** section is displayed. + +4. Check the table. + + You can see a list of configurable table columns. Each line shows the table column name inferred by TiDB Cloud, the inferred table column type, and the previewed data from the CSV file. + + - If you import data into an existing table in TiDB Cloud, the column list is extracted from the table definition, and the previewed data is mapped to the corresponding columns by column names. + + - If you want to create a new table, the column list is extracted from the CSV file, and the column type is inferred by TiDB Cloud. For example, if the previewed data is all integers, the inferred column type will be integer. + +5. Configure the column names and data types. + + If the first row in the CSV file records the column names, make sure that **Use first row as column name** is selected, which is selected by default. + + If the CSV file does not have a row for the column names, do not select **Use first row as column name**. In this case: + + - If the target table already exists, the columns in the CSV file will be imported into the target table in order. Extra columns will be truncated and missing columns will be filled with default values. 
+ + - If you need TiDB Cloud to create the target table, enter a name for each column. The column name must meet the following requirements: + + * The name must be composed of characters in Unicode BMP, excluding the null character `\u0000` and whitespace characters. + * The length of the name must be less than 65 characters. + + You can also change the data type if needed. + + > **Note:** + > + > When you import a CSV file into an existing table in TiDB Cloud and the target table has more columns than the source file, the extra columns are handled differently depending on the situation: + > - If the extra columns are not the primary keys or the unique keys, no error will be reported. Instead, these extra columns will be populated with their [default values](/data-type-default-values.md). + > - If the extra columns are the primary keys or the unique keys and do not have the `auto_increment` or `auto_random` attribute, an error will be reported. In that case, it is recommended that you choose one of the following strategies: + > - Provide a source file that includes these primary key or unique key columns. + > - Modify the target table's PK/UK columns to match the existing columns in the source file. + > - Set the attributes of the primary key or the unique key columns to `auto_increment` or `auto_random`. + +6. For a new target table, you can set the primary key. You can select a column as the primary key, or select multiple columns to create a composite primary key. The composite primary key will be formed in the order in which you select the column names. + + > **Note:** + > + > The primary key of the table is a clustered index and cannot be dropped after creation. + +7. Edit the CSV configuration if needed. + + You can also click **Edit CSV configuration** to configure Backslash Escape, Separator, and Delimiter for more fine-grained control. For more information about the CSV configuration, see [CSV Configurations for Importing Data](/tidb-cloud/csv-config-for-import-data.md). + +8. Click **Start Import**. + + You can view the import progress on the **Import Task Detail** page. If there are warnings or failed tasks, you can check to view the details and solve them. + +9. After the import task is completed, you can click **Explore your data by SQL Editor** to query your imported data. For more information about how to use SQL Editor, see [Explore your data with AI-assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). + +10. On the **Import** page, you can click **...** > **View** in the **Action** column to check the import task detail. + +## FAQ + +### Can I only import some specified columns by the Import feature in TiDB Cloud? + +No. Currently, you can only import all columns of a CSV file into an existing table when using the Import feature. + +To import only some specified columns, you can use the MySQL client to connect to your TiDB cluster, and then use [`LOAD DATA`](https://docs.pingcap.com/tidb/stable/sql-statement-load-data) to specify the columns to be imported. For example: + +```sql +CREATE TABLE `import_test` ( + `id` int(11) NOT NULL AUTO_INCREMENT, + `name` varchar(64) NOT NULL, + `address` varchar(64) NOT NULL, + PRIMARY KEY (`id`) +) ENGINE=InnoDB; +LOAD DATA LOCAL INFILE 'load.txt' INTO TABLE import_test FIELDS TERMINATED BY ',' (name, address); +``` + +If you use `mysql` and encounter `ERROR 2068 (HY000): LOAD DATA LOCAL INFILE file request rejected due to restrictions on access.`, you can add `--local-infile=true` to the connection string. 
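+For instance, with the `import_test` table above, the `id` column is not listed in the `LOAD DATA` column list, so TiDB fills it through `AUTO_INCREMENT`. A quick check after the load (assuming the statements above succeeded):
+
+```sql
+SELECT id, name, address FROM import_test LIMIT 5;
+```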
+ +### Why can't I query a column with a reserved keyword after importing data into TiDB Cloud? + +If a column name is a reserved [keyword](/keywords.md) in TiDB, when you query the column, you need to add backticks `` ` `` to enclose the column name. For example, if the column name is `order`, you need to query the column with `` `order` ``. + +### How to import a local file larger than 250 MiB? + +If the file is larger than 250 MiB, you can use [TiDB Cloud CLI](/tidb-cloud/get-started-with-cli.md) to import the file. For more information, see [`ticloud serverless import start`](/tidb-cloud/ticloud-import-start.md). + +Alternatively, you can use the `split [-l ${line_count}]` utility to split it into multiple smaller files (for Linux or macOS only). For example, run `split -l 100000 tidb-01.csv small_files` to split a file named `tidb-01.csv` by line length `100000`, and the split files are named `small_files${suffix}`. Then, you can import these smaller files to TiDB Cloud one by one. + +Refer to the following script: + +```bash +#!/bin/bash +n=$1 +file_path=$2 +file_extension="${file_path##*.}" +file_name="${file_path%.*}" +total_lines=$(wc -l < $file_path) +lines_per_file=$(( (total_lines + n - 1) / n )) +split -d -a 1 -l $lines_per_file $file_path $file_name. +for (( i=0; i<$n; i++ )) +do + mv $file_name.$i $file_name.$i.$file_extension +done +``` + +You can input `n` and a file name, and then run the script. The script will divide the file into `n` equal parts while keeping the original file extension. For example: + +```bash +> sh ./split.sh 3 mytest.customer.csv +> ls -h | grep mytest +mytest.customer.0.csv +mytest.customer.1.csv +mytest.customer.2.csv +mytest.customer.csv +```