
Commit 69f83a6

Merge pull request #9134 from soyeric128/8000
docs: port 8081 to 8000
2 parents a6062ff + 66d6d00

File tree

6 files changed, +12 -12 lines changed


docs/doc/10-deploy/07-query/10-query-config.md

Lines changed: 1 addition & 1 deletion

@@ -271,7 +271,7 @@ clickhouse_handler_port = 9001
 
 # Query HTTP Handler.
 http_handler_host = "0.0.0.0"
-http_handler_port = 8081
+http_handler_port = 8000
 
 tenant_id = "tenant1"
 cluster_id = "cluster1"
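
After this change, a quick way to confirm that the query node serves HTTP on the new port is to send a trivial query to the HTTP handler. This is a minimal sketch, assuming a local deployment with the default `root` user and an empty password, and using Databend's `/v1/query` endpoint:

```shell
# Minimal check that the HTTP handler now listens on port 8000.
# Assumes a local node with the default root user and an empty password.
curl -u 'root:' \
     -H 'Content-Type: application/json' \
     -X POST 'http://127.0.0.1:8000/v1/query' \
     -d '{"sql": "SELECT 1"}'
```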

docs/doc/12-load-data/00-stage.md

Lines changed: 6 additions & 6 deletions

@@ -66,17 +66,17 @@ Upload `books.csv` into stages:
 ```shell title='Request /v1/upload_to_stage' API
 curl -H "stage_name:my_int_stage"\
 -F "upload=@./books.csv"\
--XPUT http://root:@localhost:8081/v1/upload_to_stage
+-XPUT http://root:@localhost:8000/v1/upload_to_stage
 ```
 
 ```text title='Response'
 {"id":"50880048-f397-4d32-994c-ce3d38af430f","stage_name":"my_int_stage","state":"SUCCESS","files":["books.csv"]}
 ```
 
 :::tip
-* http://127.0.0.1:8081/v1/upload_to_stage
+* http://127.0.0.1:8000/v1/upload_to_stage
 * `127.0.0.1` is `http_handler_host` value in your *databend-query.toml*
-* `8081` is `http_handler_port` value in your *databend-query.toml*
+* `8000` is `http_handler_port` value in your *databend-query.toml*
 
 * -F \"upload=@./books.csv\"
 * Your books.csv file location

@@ -91,17 +91,17 @@ Upload `books.parquet` into stages:
 ```shell title='Request /v1/upload_to_stage' API
 curl -H "stage_name:my_int_stage"\
 -F "upload=@./books.parquet"\
--XPUT http://root:@localhost:8081/v1/upload_to_stage
+-XPUT http://root:@localhost:8000/v1/upload_to_stage
 ```
 
 ```text title='Response'
 {"id":"50880048-f397-4d32-994c-ce3d38af430f","stage_name":"my_int_stage","state":"SUCCESS","files":["books.parquet"]}
 ```
 
 :::tip
-* http://127.0.0.1:8081/v1/upload_to_stage
+* http://127.0.0.1:8000/v1/upload_to_stage
 * `127.0.0.1` is `http_handler_host` value in your *databend-query.toml*
-* `8081` is `http_handler_port` value in your *databend-query.toml*
+* `8000` is `http_handler_port` value in your *databend-query.toml*
 
 * -F \"upload=@./books.parquet\"
 * Your books.csv file location
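
Since the host and port in these requests come straight from *databend-query.toml*, it can help to parameterize them when testing against a non-default deployment. A minimal sketch of the same upload, where the `HOST` and `PORT` variables are illustrative and not part of the documented API:

```shell
# HOST and PORT must match http_handler_host / http_handler_port in databend-query.toml.
HOST=127.0.0.1
PORT=8000

curl -H "stage_name:my_int_stage" \
     -F "upload=@./books.csv" \
     -XPUT "http://root:@${HOST}:${PORT}/v1/upload_to_stage"
```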

docs/doc/12-load-data/02-local.md

Lines changed: 2 additions & 2 deletions

@@ -43,7 +43,7 @@ CREATE TABLE books
 Create and send the API request with the following scripts:
 
 ```bash
-curl -XPUT 'http://root:@127.0.0.1:8081/v1/streaming_load' -H 'insert_sql: insert into book_db.books format CSV' -H 'skip_header: 0' -H 'field_delimiter: ,' -H 'record_delimiter: \n' -F 'upload=@"./books.csv"'
+curl -XPUT 'http://root:@127.0.0.1:8000/v1/streaming_load' -H 'insert_sql: insert into book_db.books format CSV' -H 'skip_header: 0' -H 'field_delimiter: ,' -H 'record_delimiter: \n' -F 'upload=@"./books.csv"'
 ```
 
 Response Example:

@@ -101,7 +101,7 @@ CREATE TABLE bookcomments
 Create and send the API request with the following scripts:
 
 ```bash
-curl -XPUT 'http://root:@127.0.0.1:8081/v1/streaming_load' -H 'insert_sql: insert into book_db.bookcomments(title,author,date)format CSV' -H 'skip_header: 0' -H 'field_delimiter: ,' -H 'record_delimiter: \n' -F 'upload=@"./books.csv"'
+curl -XPUT 'http://root:@127.0.0.1:8000/v1/streaming_load' -H 'insert_sql: insert into book_db.bookcomments(title,author,date)format CSV' -H 'skip_header: 0' -H 'field_delimiter: ,' -H 'record_delimiter: \n' -F 'upload=@"./books.csv"'
 ```
 
 Notice that the `insert_sql` part above specifies the columns (title, author, and date) to match the loaded data.
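
For reference, the `field_delimiter: ,` and `record_delimiter: \n` headers in these requests describe a plain comma-separated file. A hypothetical `books.csv` matching the (title, author, date) columns could be created like this; the rows are illustrative sample data, not part of the original docs:

```shell
# Create a small sample books.csv with (title, author, date) columns.
# The rows below are made-up examples for illustration only.
cat > books.csv <<'EOF'
Readings in Database Systems,Michael Stonebraker,2004
Transaction Processing,Jim Gray,1992
EOF
```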

docs/doc/13-sql-reference/70-system-tables/system-tracing.md

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ SELECT * FROM system.tracing LIMIT 1\G
 *************************** 1. row ***************************
 v: 0
 name: databend-query-test_cluster@0.0.0.0:3307
-msg: Config { config_file: "scripts/ci/deploy/config/databend-query-node-1.toml", query: QueryConfig { tenant_id: "test_tenant", cluster_id: "test_cluster", num_cpus: 10, mysql_handler_host: "0.0.0.0", mysql_handler_port: 3307, max_active_sessions: 256, max_memory_usage: 0, clickhouse_handler_host: "0.0.0.0", clickhouse_handler_port: 9001, clickhouse_http_handler_host: "0.0.0.0", clickhouse_http_handler_port: 8125, http_handler_host: "0.0.0.0", http_handler_port: 8001, http_handler_result_timeout_millis: 10000, flight_api_address: "0.0.0.0:9091", admin_api_address: "0.0.0.0:8081", metric_api_address: "0.0.0.0:7071", http_handler_tls_server_cert: "", http_handler_tls_server_key: "", http_handler_tls_server_root_ca_cert: "", api_tls_server_cert: "", api_tls_server_key: "", api_tls_server_root_ca_cert: "", rpc_tls_server_cert: "", rpc_tls_server_key: "", rpc_tls_query_server_root_ca_cert: "", rpc_tls_query_service_domain_name: "localhost", table_engine_memory_enabled: true, database_engine_github_enabled: true, wait_timeout_mills: 5000, max_query_log_size: 10000, table_cache_enabled: true, table_cache_snapshot_count: 256, table_cache_segment_count: 10240, table_cache_block_meta_count: 102400, table_memory_cache_mb_size: 1024, table_disk_cache_root: "_cache", table_disk_cache_mb_size: 10240, management_mode: false, jwt_key_file: "" }, log: LogConfig { log_level: "INFO", log_dir: "./_logs", log_query_enabled: false }, meta: {meta_address: "0.0.0.0:9191", meta_user: "root", meta_password: "******"}, storage: StorageConfig { storage_type: "disk", storage_num_cpus: 0, disk: FsStorageConfig { data_path: "stateless_test_data", temp_data_path: "" }, s3: {s3.storage.region: "", s3.storage.endpoint_url: "https://s3.amazonaws.com", s3.storage.bucket: "", s3.storage.access_key_id: "", s3.storage.secret_access_key: "", }, azure_storage_blob: {Azure.storage.container: "", } } }
+msg: Config { config_file: "scripts/ci/deploy/config/databend-query-node-1.toml", query: QueryConfig { tenant_id: "test_tenant", cluster_id: "test_cluster", num_cpus: 10, mysql_handler_host: "0.0.0.0", mysql_handler_port: 3307, max_active_sessions: 256, max_memory_usage: 0, clickhouse_handler_host: "0.0.0.0", clickhouse_handler_port: 9001, clickhouse_http_handler_host: "0.0.0.0", clickhouse_http_handler_port: 8125, http_handler_host: "0.0.0.0", http_handler_port: 8001, http_handler_result_timeout_millis: 10000, flight_api_address: "0.0.0.0:9091", admin_api_address: "0.0.0.0:8000", metric_api_address: "0.0.0.0:7071", http_handler_tls_server_cert: "", http_handler_tls_server_key: "", http_handler_tls_server_root_ca_cert: "", api_tls_server_cert: "", api_tls_server_key: "", api_tls_server_root_ca_cert: "", rpc_tls_server_cert: "", rpc_tls_server_key: "", rpc_tls_query_server_root_ca_cert: "", rpc_tls_query_service_domain_name: "localhost", table_engine_memory_enabled: true, database_engine_github_enabled: true, wait_timeout_mills: 5000, max_query_log_size: 10000, table_cache_enabled: true, table_cache_snapshot_count: 256, table_cache_segment_count: 10240, table_cache_block_meta_count: 102400, table_memory_cache_mb_size: 1024, table_disk_cache_root: "_cache", table_disk_cache_mb_size: 10240, management_mode: false, jwt_key_file: "" }, log: LogConfig { log_level: "INFO", log_dir: "./_logs", log_query_enabled: false }, meta: {meta_address: "0.0.0.0:9191", meta_user: "root", meta_password: "******"}, storage: StorageConfig { storage_type: "disk", storage_num_cpus: 0, disk: FsStorageConfig { data_path: "stateless_test_data", temp_data_path: "" }, s3: {s3.storage.region: "", s3.storage.endpoint_url: "https://s3.amazonaws.com", s3.storage.bucket: "", s3.storage.access_key_id: "", s3.storage.secret_access_key: "", }, azure_storage_blob: {Azure.storage.container: "", } } }
 level: 30
 hostname: localhost
 pid: 24640
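
The `\G` output shown in this hunk comes from a MySQL-compatible client session. A sketch of reproducing it, assuming the `mysql_handler_port: 3307` and `root` user visible in the sample config above:

```shell
# Query system.tracing through the MySQL-compatible handler.
# Port 3307 and the root user are taken from the sample output above; adjust for your deployment.
mysql -h 127.0.0.1 -P 3307 -u root -e 'SELECT * FROM system.tracing LIMIT 1\G'
```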

docs/doc/21-use-cases/05-analyze-hits-dataset-with-databend.md

Lines changed: 1 addition & 1 deletion

@@ -47,7 +47,7 @@ gzip -d hits_1m.csv.gz
 ```
 
 ```shell title='Load CSV files into Databend'
-curl -H "insert_sql:insert into hits format TSV" -F "upload=@./hits_1m.tsv" -XPUT http://user1:abc123@127.0.0.1:8081/v1/streaming_load
+curl -H "insert_sql:insert into hits format TSV" -F "upload=@./hits_1m.tsv" -XPUT http://user1:abc123@127.0.0.1:8000/v1/streaming_load
 ```
 
 ## Step 3. Queries

docs/doc/90-contributing/03-rfcs/20220704-presign.md

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ Add a new SQL statement for `PRESIGN`, so users can generate a presigned URL for
 
 Databend supports [loading data](https://databend.rs/doc/load-data) via internal stage:
 
-- Call HTTP API `upload_to_stage` to upload files: `curl -H "stage_name:my_int_stage" -F "upload=@./books.csv" -XPUT http://localhost:8081/v1/upload_to_stage`
+- Call HTTP API `upload_to_stage` to upload files: `curl -H "stage_name:my_int_stage" -F "upload=@./books.csv" -XPUT http://localhost:8000/v1/upload_to_stage`
 - Call `COPY INTO` to copy data: `COPY INTO books FROM '@my_int_stage'`
 
 This workflow's throughput is limited by databend's HTTP API: `upload_to_stage`. We can improve the throughput by allowing users to upload to our backend storage directly. For example, we can use [AWS Authenticating Requests: Using Query Parameters](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html) to generate a presigned URL. This way, users upload content to AWS s3 directly without going through the databend.
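
As a rough illustration of the flow this RFC proposes: once a presigned URL has been generated, the client uploads straight to the backend storage with a plain HTTP PUT, bypassing the `upload_to_stage` API. The URL below is a placeholder, not real output:

```shell
# Hypothetical presigned URL produced by the proposed PRESIGN statement (placeholder value).
PRESIGNED_URL='https://my-bucket.s3.amazonaws.com/stage/my_int_stage/books.csv?X-Amz-Signature=...'

# Upload the file directly to object storage; databend-query is no longer in the data path.
curl -X PUT -T ./books.csv "$PRESIGNED_URL"
```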
