Built by Metorial, the integration platform for agentic AI.
Create a new BigQuery routine: user-defined function (UDF), stored procedure, or table-valued function. Routines can be written in SQL or JavaScript.
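Under the assumption that this tool wraps the BigQuery Routines REST resource, a SQL scalar UDF body can be sketched as below; the helper name and all identifiers (`my-project`, `analytics`, `double_it`) are illustrative, not part of the tool's API:

```python
# Sketch of a BigQuery Routines API resource for a SQL scalar UDF.
# All identifiers (project, dataset, routine name) are illustrative.
def build_udf_resource(project, dataset, routine_id, arg_name, body):
    return {
        "routineReference": {
            "projectId": project,
            "datasetId": dataset,
            "routineId": routine_id,
        },
        # SCALAR_FUNCTION for a UDF; PROCEDURE and TABLE_VALUED_FUNCTION
        # cover the other two routine kinds this tool supports.
        "routineType": "SCALAR_FUNCTION",
        "language": "SQL",  # or "JAVASCRIPT"
        "arguments": [{"name": arg_name, "dataType": {"typeKind": "INT64"}}],
        "returnType": {"typeKind": "INT64"},
        "definitionBody": body,  # the SQL expression, e.g. "x * 2"
    }

udf = build_udf_resource("my-project", "analytics", "double_it", "x", "x * 2")
```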
Export a BigQuery table to Google Cloud Storage as CSV, JSON, or Avro. Creates an asynchronous extract job. Use wildcards in the destination URI for sharded exports of large tables (e.g., gs://bucket/file-*.csv).
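Assuming the tool submits a standard extract job, the sharded-export case looks roughly like this; the bucket and table names are placeholders:

```python
# Sketch of a BigQuery extract (export) job configuration.
# Bucket, project, dataset, and table names are illustrative.
def build_extract_config(project, dataset, table, dest_uris, fmt="CSV"):
    return {
        "configuration": {
            "extract": {
                "sourceTable": {
                    "projectId": project,
                    "datasetId": dataset,
                    "tableId": table,
                },
                # A "*" wildcard in a URI makes BigQuery shard the output,
                # which is required for tables over the single-file limit.
                "destinationUris": dest_uris,
                "destinationFormat": fmt,  # CSV, NEWLINE_DELIMITED_JSON, AVRO
            }
        }
    }

job = build_extract_config(
    "my-project", "analytics", "events", ["gs://my-bucket/events-*.csv"]
)
```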
Delete a BigQuery dataset. By default, the dataset must be empty. Set **deleteContents** to true to also delete all tables and views within it.
Permanently delete a BigQuery routine (UDF, procedure, or TVF). This action is irreversible.
Retrieve detailed information about a specific BigQuery job including its configuration, status, and execution statistics.
Update a BigQuery table's metadata including friendly name, description, schema (add new columns), expiration, and labels.
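One subtlety worth illustrating: BigQuery schema updates are additive, so a patch body must repeat every existing column and append the new ones. A small sketch (the helper is hypothetical, not part of this tool):

```python
# Sketch of building an additive schema patch for a table update.
def add_columns(existing_fields, new_fields):
    """Return a patch body whose schema contains all existing columns
    plus the new ones; existing columns cannot be dropped or renamed."""
    names = {f["name"] for f in existing_fields}
    for f in new_fields:
        if f["name"] in names:
            raise ValueError(f"column {f['name']} already exists")
    return {"schema": {"fields": existing_fields + new_fields}}

patch = add_columns(
    [{"name": "id", "type": "INT64", "mode": "REQUIRED"}],
    [{"name": "note", "type": "STRING", "mode": "NULLABLE"}],
)
```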
Create a new BigQuery dataset. A dataset is a top-level container for tables, views, and routines. Once created, its location cannot be changed.
Create a new BigQuery table, view, or materialized view. Supports defining schemas with nested/repeated fields, time or range partitioning, and clustering. To create a view, provide the **viewQuery** parameter; for a materialized view, provide **materializedViewQuery**.
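Assuming the tool builds a standard Tables API resource, a partitioned, clustered table and a view can be sketched as follows; field names and identifiers are illustrative:

```python
# Sketch of BigQuery Tables API resources. Identifiers are illustrative.
def build_table_resource(project, dataset, table):
    return {
        "tableReference": {
            "projectId": project,
            "datasetId": dataset,
            "tableId": table,
        },
        "schema": {
            "fields": [
                {"name": "event_time", "type": "TIMESTAMP", "mode": "REQUIRED"},
                {"name": "user_id", "type": "STRING", "mode": "NULLABLE"},
                # REPEATED mode models an array; type RECORD would nest fields.
                {"name": "tags", "type": "STRING", "mode": "REPEATED"},
            ]
        },
        "timePartitioning": {"type": "DAY", "field": "event_time"},
        "clustering": {"fields": ["user_id"]},
    }

def build_view_resource(project, dataset, view, view_query):
    # For a view, a "view" block with the defining query replaces the
    # explicit schema; BigQuery derives the schema from the query.
    return {
        "tableReference": {
            "projectId": project,
            "datasetId": dataset,
            "tableId": view,
        },
        "view": {"query": view_query, "useLegacySql": False},
    }

tbl = build_table_resource("my-project", "analytics", "daily_events")
vw = build_view_resource("my-project", "analytics", "recent_events",
                         "SELECT * FROM analytics.daily_events")
```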
Copy a BigQuery table to another table, either within the same dataset or across datasets and projects. Creates an asynchronous copy job.
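A cross-project copy is just a copy job whose two table references carry different project IDs; a sketch of the presumed job configuration (tuple-based helper and names are illustrative):

```python
# Sketch of a BigQuery copy job configuration.
# src/dst are (project, dataset, table) tuples; identifiers are illustrative.
def build_copy_config(src, dst, write_disposition="WRITE_TRUNCATE"):
    def ref(t):
        return {"projectId": t[0], "datasetId": t[1], "tableId": t[2]}

    return {
        "configuration": {
            "copy": {
                "sourceTable": ref(src),
                "destinationTable": ref(dst),
                # WRITE_TRUNCATE overwrites; WRITE_APPEND and WRITE_EMPTY
                # are the other standard dispositions.
                "writeDisposition": write_disposition,
            }
        }
    }

job = build_copy_config(
    ("proj-a", "raw", "events"), ("proj-b", "archive", "events_2024")
)
```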
List all tables, views, and materialized views in a BigQuery dataset. Returns table IDs, types, creation times, and expiration info.
Stream rows into a BigQuery table using the streaming insert API. Rows are available for querying almost immediately. Each row is a JSON object matching the table schema. Optionally provide an insertId per row for best-effort deduplication.
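Assuming the tool forwards to the `tabledata.insertAll` endpoint, the per-row `insertId` wrapping can be sketched like this (the helper name is hypothetical):

```python
import uuid

# Sketch of a tabledata.insertAll request body. The helper name is
# illustrative; row dicts must match the destination table's schema.
def build_insert_all_payload(rows):
    """Wrap each row in the insertAll shape, attaching a random insertId
    so BigQuery can best-effort deduplicate retried batches."""
    return {
        "rows": [{"insertId": str(uuid.uuid4()), "json": row} for row in rows]
    }

payload = build_insert_all_payload([{"user_id": "u1", "score": 10}])
```

Reusing the same `insertId` on a retry is what lets BigQuery drop the duplicate; generating a fresh one per attempt would defeat deduplication.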
Load data from Google Cloud Storage into a BigQuery table. Supports CSV, JSON (newline-delimited), Avro, Parquet, ORC, Datastore, and Firestore export formats. Creates an asynchronous load job and returns the job status.
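Assuming a standard load job underneath, the configuration for a newline-delimited JSON load looks roughly like this; URIs and identifiers are placeholders:

```python
# Sketch of a BigQuery load job configuration. Identifiers are illustrative.
def build_load_config(project, dataset, table, uris,
                      source_format="NEWLINE_DELIMITED_JSON",
                      write_disposition="WRITE_APPEND"):
    return {
        "configuration": {
            "load": {
                "sourceUris": uris,  # gs:// URIs; wildcards are allowed
                "destinationTable": {
                    "projectId": project,
                    "datasetId": dataset,
                    "tableId": table,
                },
                # Other accepted formats include CSV, AVRO, PARQUET, ORC,
                # and DATASTORE_BACKUP (also used for Firestore exports).
                "sourceFormat": source_format,
                "writeDisposition": write_disposition,
            }
        }
    }

job = build_load_config("my-project", "analytics", "events",
                        ["gs://my-bucket/exports/events-*.json"])
```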
List BigQuery jobs in the project. Jobs include queries, loads, exports, and copy operations. Filter by state, time range, or parent job.
Retrieve detailed information about a specific BigQuery dataset, including its access controls, default table expiration, creation time, location, and labels.
Run a GoogleSQL (standard SQL) query against BigQuery. Supports SELECT, DML (INSERT, UPDATE, DELETE, MERGE), and DDL (CREATE, ALTER, DROP) statements. The query is submitted as a job, polled for completion, and results are returned. Parameterized queries are supported for safe value interpolation. You can optionally write results to a destination table.
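Assuming the tool submits a standard query job, named parameters are encoded as typed name/value pairs rather than spliced into the SQL text; a sketch (helper and identifiers are illustrative):

```python
# Sketch of a BigQuery query request with named parameters.
# The helper name and the table path are illustrative.
def build_query_request(sql, named_params):
    """named_params maps parameter name -> (BigQuery type, value).
    Values travel as strings in the REST queryParameter encoding."""
    return {
        "query": sql,
        "useLegacySql": False,  # GoogleSQL (standard SQL)
        "parameterMode": "NAMED",
        "queryParameters": [
            {
                "name": name,
                "parameterType": {"type": ptype},
                "parameterValue": {"value": str(value)},
            }
            for name, (ptype, value) in named_params.items()
        ],
    }

req = build_query_request(
    "SELECT name FROM `my-project.analytics.users` WHERE age >= @min_age",
    {"min_age": ("INT64", 18)},
)
```

Because the value never touches the SQL string, parameterized queries sidestep injection and quoting issues entirely.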
List user-defined functions (UDFs), stored procedures, and table-valued functions in a BigQuery dataset.
Retrieve detailed information about a specific BigQuery routine including its definition, arguments, return type, and language.
List all datasets in the configured BigQuery project. Returns dataset IDs, friendly names, locations, and labels. Use the **filter** parameter to narrow results.
Retrieve detailed metadata for a BigQuery table, including its schema, row count, size, partitioning configuration, and clustering settings.
Permanently delete a BigQuery table or view. This action is irreversible.
Cancel a running BigQuery job. The cancellation is best-effort; the job may still complete before the cancellation takes effect.
Read rows directly from a BigQuery table without running a query job. Useful for quickly inspecting table contents. For complex filtering or aggregation, use **Execute SQL Query** instead.
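If the tool reads via the `tabledata.list` endpoint, rows come back in a positional `f`/`v` cell encoding that must be paired with schema field names; a small decoding sketch (the helper is hypothetical):

```python
# Sketch of decoding the tabledata.list response encoding, where each
# row is {"f": [{"v": value}, ...]} in schema field order.
def decode_rows(schema_fields, rows):
    names = [f["name"] for f in schema_fields]
    return [
        dict(zip(names, (cell["v"] for cell in row["f"])))
        for row in rows
    ]

decoded = decode_rows(
    [{"name": "id"}, {"name": "city"}],
    [{"f": [{"v": "1"}, {"v": "Berlin"}]}],
)
```

Note that scalar values arrive as strings in this encoding; numeric columns need an explicit cast after decoding.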
Update an existing BigQuery dataset's metadata, including its friendly name, description, labels, and default expiration settings.