An Introduction to BigQuery Data Types with Examples

Running an online shop on a global marketplace like Amazon or eBay means an ongoing flow of various data streams from five continents, all into one analytics pipeline. This can easily lead to information chaos — marketers want up-to-date ROI figures now, finance wants purchase and revenue figures, while executives demand clean numbers for tomorrow’s board meeting.
BigQuery’s serverless capacity and the ability to aggregate and manipulate data can make all stakeholders happy in minutes.
Throughout this guide, we’ll recreate similar high-pressure moments with sample datasets and hands-on SQL. You’ll learn about string functions, boolean fields for conditional logic, and many other essential and advanced BigQuery data types. Tips and reference sheets for choosing the optimal data type for your task will also be included.
So stay focused as we begin an exciting journey into BigQuery data types.
What BigQuery data types are and why they matter
With BQ (short for BigQuery), users load data into tables and query it with SQL (Structured Query Language). The more data you bring, the richer the results for your business.
Performance, though, depends on how well you describe each data field. Typed columns compress better, filter faster, and make joins less brittle.
For many, it helps to think of BQ data types as labels, e.g., “this value is a date,” “this one is money,” “this one is a flag.” The data warehouse (as BQ is often called) can keep those labels straight only if you declare them properly.
Let’s map out the essential data types at a glance:
- The Boolean data type is used for true/false logic, compact and unambiguous.
- Integers are used for counts and IDs, perfect for join keys.
- Decimals (NUMERIC, BIGNUMERIC) are the preferred types for currency, keeping sums exact and rounding predictable.
- Specialty types (GEOGRAPHY, JSON) are commonly used for niche but powerful use cases.
Further down the article, we’ll weave each concept into a runnable SQL example and highlight best practices, as well as mistakes that are easy to fix.
Once you understand the catalogue of BigQuery data types, storage tuning and query speed-ups will become straightforward.
Overview of primitive data types in BigQuery
When you open a table in Google Cloud, you immediately see the column names, but not the rules beneath them. First come the so-called primitive types. Despite the name, they do the heavy lifting: every column’s primitive type tells the database engine how to compress, scan, and join the data.
In other words, primitive types are the building blocks that support more complex structures. They cover everyday needs like counting orders, flagging active users, converting time zones for global marketplaces, and storing prices in any currency of the world.
Below is a quick reference list of primitive data types. Keep it at your fingertips for later, as we’ll dive deeper into many of these values:
- DATE, TIME, and TIMESTAMP — calendar dates, clock times, and points in universal time.
- NUMERIC and BIGNUMERIC — exact decimals, perfect for currency and tax rates.
- INT64 — whole numbers for counts, IDs, and anything that never needs a decimal.
- FLOAT64 — represents approximate numbers; handy for scientific metrics but less ideal for money.
- BOOLEAN — true/false flags, clean and tiny.
- STRING — UTF-8 text, from names to URLs.
- BYTES — raw binary, often used for hashed values.
Each primitive data type has a specific strength. An event log might mix an INT64 identifier with a boolean status field and a float temperature reading.
Virtually every business and marketing domain has a use for these primitive data types.
Here is an example of mapping for link analytics for SEO:
BOOLEAN is_guest_post, NUMERIC domain_authority, STRING rel_attribute, and TIMESTAMP last_crawled. Add STRING anchor_text and DATE first_seen to capture context and discovery date, plus a STRING source_url to join with rankings or crawl logs.
This schema will keep your SEO dashboards tidy: segment guest posts vs. editorial, compare link quality over time, and set freshness alerts (e.g., days since last_crawled).
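If you want to put this mapping to work, here’s a minimal DDL sketch; the dataset, table name (seo.backlinks), and exact column list are illustrative, not a required schema:
-- Illustrative schema for link analytics (seo.backlinks is a made-up name)
CREATE TABLE IF NOT EXISTS seo.backlinks (
  source_url STRING,       -- join key for rankings or crawl logs
  anchor_text STRING,
  rel_attribute STRING,
  is_guest_post BOOL,
  domain_authority NUMERIC,
  first_seen DATE,         -- discovery date
  last_crawled TIMESTAMP   -- last verification moment, in UTC
);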
BQ also rewards good typing on the billing front. Smaller, well-defined columns shave scan bytes, translating directly into lower costs.
So before you run that first SELECT *, pause and confirm that integers aren’t masquerading as strings and that money isn’t hiding in a float column.
Working with date and time types
Working with dates and times in BQ means choosing the field type that matches your question. Google’s cloud data warehouse offers four core types — DATE, TIME, DATETIME, and TIMESTAMP — each tailored for a different slice of time. Picking the right one up front prevents confusing results and extra casts down the road.
As mentioned earlier, BQ uses SQL to manipulate data. SQL stands for Structured Query Language, and it has long been the standard way to ask precise questions of data across many kinds of databases.
IT and database specialists use SQL as the standard way to read, filter, and group information in Google’s cloud data warehouse, so getting comfortable with its syntax pays off immediately.
Without further ado, let’s dive into the basic date and time types. Use the DATE type when you only care about the calendar day. For example:
SELECT COUNT(*)
FROM sales.orders
WHERE order_date BETWEEN '2025-01-01' AND '2025-01-31';
This query gives you the January order volume without worrying about hours or time zones.
When you need exact moments across regions, TIMESTAMP is your go-to. For instance:
SELECT
event_id,
TIMESTAMP_DIFF(next_event, event_time, SECOND) AS gap_seconds
FROM analytics.events;
Here, you calculate the interval between two events in seconds; because TIMESTAMP values represent absolute moments stored in UTC, there is no time zone math to worry about.
If you are doing guest blogging, consider using DATE first_seen and TIMESTAMP last_crawled to chart how your guest posting strategies accumulate backlinks over time and when they were last verified.
For local scheduling, like office hours or shift times, the TIME and DATETIME types shine. TIME stores clock values without dates, while DATETIME combines date and clock but ignores time zones.
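Here’s a quick sketch of both in action; the table and column names (retail.store_hours, opens_at) are made up for illustration:
-- Local scheduling with TIME and DATETIME (illustrative names)
SELECT
  store_id,
  opens_at,                                                  -- TIME value such as TIME '09:00:00'
  DATETIME(DATE '2025-03-03', opens_at) AS monday_opening    -- local date + clock, no time zone
FROM retail.store_hours
WHERE opens_at <= TIME '09:00:00';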
Understanding these distinctions will keep your queries fast and your results reliable.
Using numeric and bignumeric for high-precision data
Floating-point numbers are handy for quick averages, but they start to wobble when you add up thousands of invoices or trace cryptocurrency prices down to the eighth decimal.
That’s where numeric and its beefier cousin, bignumeric, come in. Both types store exact decimal values, so a rounding slip never sneaks into your profit report.
In BQ, a numeric field supports 38 total digits, nine of them after the decimal. Bignumeric doubles down with 76 digits total and up to 38 after the point — plenty for scientific constants or blockchain token balances.
Knowing which one to pick is like choosing a microscope lens: zoom only as far as you actually need.
Let’s take a look at how this works with real currency data:
SELECT
customer_id,
SUM(amount) AS total_spent
FROM finance.invoices
WHERE invoice_date BETWEEN '2025-01-01' AND '2025-12-31'
GROUP BY customer_id;
Here, the amount is a numeric column. The sum stays precise to the cent — no unexpected pennies appear or vanish.
Now let’s see how this helps with high-precision price tracking. Say you're analyzing small price changes over time — here’s how you’d do it:
SELECT
ROUND(AVG(price), 8) AS avg_price_last_hour
FROM crypto.trades
WHERE trade_ts > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR);
The price column lives in bignumeric, letting you average values carried out to eight decimal places without losing fidelity.
Choose numeric for everyday finance, tax rates, or interest calculations. Reach for bignumeric when you need more room — think scientific research data or fractional crypto tokens. Both compress nearly as well as integers, so storage costs stay reasonable in Google’s cloud data warehouse.
A gentle warning: avoid casting floats into these exact types at the last minute; you could cement earlier rounding glitches. Instead, ingest data directly into numeric formats whenever possible.
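If you want to see the difference for yourself, here’s a small self-contained query (no tables required) that sums the same value as FLOAT64 and as NUMERIC:
-- FLOAT64 accumulates tiny errors; NUMERIC stays exact
SELECT
  SUM(CAST(0.1 AS FLOAT64)) AS float_total,    -- usually off by a tiny fraction
  SUM(CAST(0.1 AS NUMERIC)) AS numeric_total   -- exactly 10000
FROM UNNEST(GENERATE_ARRAY(1, 100000));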
Among the many BigQuery data types, numeric and bignumeric are the ones that give accountants, scientists, and crypto analysts the precision they can rely on.
Understanding arrays and structs as complex data types
Arrays and structs let you store related information together instead of scattering it across multiple tables. An array is an ordered list inside one record, while a struct is a miniature record made of named fields.
Google’s cloud data warehouse handles both without breaking row-level compression, which helps keep storage costs under control.
Why bother? Because a single customer may have many tags, or an order may hold several items. Modeling each extra detail as a separate row causes fan-out joins; arrays and structs keep the detail close to the parent row and speed up lookups. They also make JSON-like data feel native in SQL.
Here’s how an array looks in practice:
SELECT user_id,
ARRAY_LENGTH(tags) AS tag_count
FROM marketing.users
WHERE 'vip' IN UNNEST(tags);
In this example, tags is an array of strings, and the query checks whether the list for each row contains “vip.” No self-join, no extra table — just one field with flexible content.
Now a struct example:
SELECT
order_id,
items.product_id,
items.quantity
FROM commerce.orders,
UNNEST(line_items) AS items;
Here, line_items is an array of structs. Each struct has product_id and quantity, letting you explode orders into their individual components without a join.
Keep the following three guidelines in mind:
- First, arrays should group data that’s naturally plural, like phone numbers.
- Second, structs work well for nested attributes — think coordinates or address parts.
- Lastly, always document the element order or struct schema; future you (or your teammate) will thank you.
BQ still enforces type rules inside every array element and struct field, so choose data types carefully.
Treat these complex types as extensions of the relational model, not replacements. Used thoughtfully, arrays and structs will simplify queries and cut down on join gymnastics.
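If you’re wondering how such values are written by hand, here’s a minimal sketch with made-up order and SKU values that builds an array of structs inline:
-- Building an ARRAY of STRUCTs inline (values are made up)
SELECT
  'ORD-1001' AS order_id,
  [
    STRUCT('SKU-1' AS product_id, 2 AS quantity),
    STRUCT('SKU-7' AS product_id, 1 AS quantity)
  ] AS line_items;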
How to work with geography and spatial data
By spatial, we simply mean data with a location — anything you can place on a map. That could be a store’s coordinates, a delivery route, or a city boundary. Spatial questions sound like “which region is this in?” or “what’s the nearest facility?”
Spatial work in BQ starts with the GEOGRAPHY type. It stores shapes — points, lines, and polygons — on the WGS84 spheroid that web maps use. Think “business questions on a map,” answered with SQL.
A quick reminder on construction: ST_GEOGPOINT takes longitude first, then latitude. That’s the most common mistake people make when they first load store lists or check-in logs. Once your coordinates are clean, distance and containment queries are straightforward.
For example, here’s how to find the nearest warehouse for every order location for an e-commerce business:
-- Find the nearest warehouse for each order location
WITH orders AS (
SELECT order_id, ST_GEOGPOINT(lon, lat) AS order_pt FROM ecommerce.orders_today
),
wh AS (
SELECT wh_id, ST_GEOGPOINT(lon, lat) AS wh_pt FROM logistics.warehouses
)
SELECT
o.order_id,
w.wh_id,
ST_DISTANCE(o.order_pt, w.wh_pt) AS meters_away
FROM orders o
JOIN wh w
ON TRUE
WHERE TRUE  -- QUALIFY needs a WHERE, GROUP BY, or HAVING clause alongside it
QUALIFY ROW_NUMBER() OVER (PARTITION BY o.order_id ORDER BY ST_DISTANCE(o.order_pt, w.wh_pt)) = 1;
This picks the closest warehouse per order using spherical distance in meters. It’s a good backbone for SLA estimates or delivery zone logic.
Another example comes from content marketing outreach campaigns. SQL in BQ lets you label publishers by market so you can target outreach locally:
-- Tag articles as "in-market" if their host domain falls inside a target region
WITH targets AS (
SELECT region_name, region_geo -- polygon/multipolygon GEOGRAPHY
FROM marketing.target_regions
),
sites AS (
SELECT domain, ST_GEOGPOINT(lon, lat) AS site_pt FROM seo.publisher_locations
)
SELECT
s.domain,
t.region_name,
ST_CONTAINS(t.region_geo, s.site_pt) AS in_market
FROM sites s
LEFT JOIN targets t
ON ST_CONTAINS(t.region_geo, s.site_pt);
This pattern helps you evaluate publishers by region before you pitch them. For practical ideas on choosing markets, see geographic segmentation for local guest posting.
Here are several simple habits that will keep your spatial queries reliable:
- Store coordinates once, as GEOGRAPHY, not as separate floats.
- Build points with ST_GEOGPOINT(longitude, latitude) to avoid swapped axes.
- Keep polygons simplified when possible to reduce computation on ST_CONTAINS.
- Cluster or partition tables by a stable key (e.g., region_id or date) to prune scans.
- Document units: ST_DISTANCE returns meters; convert once, not in every query.
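As a starting point for the first and fourth habits, here’s a minimal table sketch; the dataset, table, and column names are illustrative:
-- Spatial data stored as GEOGRAPHY and clustered for scan pruning (illustrative names)
CREATE TABLE IF NOT EXISTS logistics.warehouses_geo (
  wh_id STRING,
  region_id STRING,
  location GEOGRAPHY   -- one GEOGRAPHY column instead of separate lat/lon floats
)
CLUSTER BY region_id;  -- a stable key helps prune scans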
Google’s cloud data warehouse won’t make modeling decisions for you, but it gives you sturdy primitives. Combine clean GEOGRAPHY fields with clear regions, and you can answer “where” questions with the same confidence you answer “what” and “when.”
Using the JSON data type for semi-structured data
Semi-structured data shows up everywhere — webhooks, product catalogs, clickstream logs. Instead of forcing it into rigid columns, BigQuery lets you store the payload in a JSON column and peel out only the fields you need.
It’s flexible, fast to ingest, and perfect when the shape changes over time.
A quick mental model: JSON is a container; you extract values on demand. For stable, frequently used attributes, keep a normal typed column next to the payload. That way, you get easy filters and clustering, plus the freedom to keep the rest as JSON.
Let’s see a small query that reads fields from a JSON payload:
-- Pull key fields from a raw events table with a JSON column
SELECT
JSON_VALUE(event, '$.user.id') AS user_id,
SAFE_CAST(JSON_VALUE(event, '$.amount') AS NUMERIC) AS amount,
JSON_VALUE(event, '$.source') AS source,
TIMESTAMP(JSON_VALUE(event, '$.ts')) AS event_ts
FROM raw.events
WHERE JSON_VALUE(event, '$.type') = 'purchase';
Here, we filter purchases and extract typed values as needed. The SAFE_CAST line protects you from malformed numbers.
But before you load more JSON into your database, keep these golden rules in mind:
- Store hot fields twice: typed column + JSON, for speed and flexibility.
- Use JSON_VALUE for scalars; JSON_QUERY when you want nested JSON.
- Validate inputs with simple checks; reject obviously broken payloads.
- Prefer UTC timestamps and only convert them to a local time zone when displaying results, not while the query is running.
- Avoid double-encoding — don’t store JSON as quoted text inside JSON.
- Materialize common paths in a view for team reuse.
- Start small and add paths as questions appear.
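To follow the “materialize common paths” rule, one option is a plain view the whole team can reuse; the view name below is illustrative, and the query simply wraps the example above:
-- A shared view over the raw JSON payload (view name is illustrative)
CREATE OR REPLACE VIEW raw.purchase_events AS
SELECT
  JSON_VALUE(event, '$.user.id') AS user_id,
  SAFE_CAST(JSON_VALUE(event, '$.amount') AS NUMERIC) AS amount,
  TIMESTAMP(JSON_VALUE(event, '$.ts')) AS event_ts
FROM raw.events
WHERE JSON_VALUE(event, '$.type') = 'purchase';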
Google’s cloud data warehouse won’t guess types inside JSON, so be explicit when you cast. As usage stabilizes, promote high-value fields to regular columns and keep the rest in the payload.
This blend will give you flexible ingestion and predictable queries without redesigning tables every sprint.
How to define and use data types in SQL queries
Data types do two jobs at once: they compress your data efficiently and tell SQL what operations make sense.
In Google’s cloud data warehouse, you express that by giving each column an intentional type and sticking with it throughout the pipeline. The payoff is fewer casts, cleaner joins, and predictable results.
Start with a typed table or a typed view. If your source is messy, use a staging query that cleans values — apply string functions to strip whitespace, standardize case, or isolate digits. Then apply SAFE_CAST() to the final type.
That single pass is faster than re-cleaning the same field in every report.
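Here’s a sketch of that single staging pass; the table and column names are assumptions, not a fixed schema:
-- Clean once, cast once, store typed (illustrative names)
CREATE OR REPLACE TABLE staging.orders_typed AS
SELECT
  TRIM(order_id_txt) AS order_id,                                               -- STRING so leading zeros survive
  SAFE_CAST(REGEXP_REPLACE(amount_txt, r'[^0-9.-]', '') AS NUMERIC) AS amount,  -- exact money
  SAFE_CAST(LOWER(TRIM(is_gift_txt)) IN ('true', '1', 'yes') AS BOOL) AS is_gift,
  PARSE_TIMESTAMP('%FT%T%Ez', ordered_at_txt) AS ordered_at                     -- stored in UTC
FROM raw.orders;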
Typed comparisons are also safer. Comparing TIMESTAMP values properly respects time math, while comparing timestamp strings can break around daylight saving changes.
Numbers behave similarly: NUMERIC avoids rounding surprises that show up with FLOAT64 when you sum or average large sets.
Use this checklist when deciding on data types in SQL:
- For every column, write down the business meaning and pick the narrowest type that fits (e.g., BOOL for flags).
- Convert from raw text once — SAFE_CAST() catches bad input without crashing your query.
- Use time constructors (DATE(), TIMESTAMP()) and store UTC; convert for users only at presentation time.
- Keep money and counts apart: NUMERIC for currency, INT64 for counts; avoid mixing them in one field.
- Normalize text with string functions (LOWER, TRIM, REGEXP_REPLACE) before you store it typed.
- Add lightweight constraints in logic (e.g., WHERE amount >= 0) to guard the dataset.
When a model changes, don’t panic. Create a view that exposes the old names mapped to the new, typed fields, and migrate dashboards gradually. With BQ, views are cheap and let you keep SQL tidy while the schema catches up.
Good typing feels invisible when it works. Your queries read like plain language, scans are smaller, and reports stop drifting because every value is stored in the shape it was meant to be.
Casting and converting between data types
Casting is how you tell SQL, “treat this value as that type.” In BQ, that usually means CAST() or its safer cousin, SAFE_CAST(). Use them to turn text into dates, numbers, booleans, or timestamps.
CAST() will fail the query if a value can’t be converted. SAFE_CAST() returns NULL instead, which lets the query finish and makes bad rows easy to count. In BigQuery, that small choice often decides whether a nightly job completes or stalls.
You’ll meet parsing functions too: PARSE_DATE, PARSE_TIME, and PARSE_TIMESTAMP read formatted strings using a pattern. On the flip side, FORMAT_* functions turn typed values back into strings for export. Together, they bridge messy sources and tidy tables across the whole family of BigQuery data types.
For your convenience, keep this practical checklist nearby when converting:
- Prefer SAFE_CAST() for ingestion and staging. You can isolate bad rows with WHERE SAFE_CAST(x AS INT64) IS NULL AND x IS NOT NULL.
- Parse times with the right tool: PARSE_DATE('%F', d_txt), PARSE_TIMESTAMP('%FT%T%Ez', ts_txt). Store timestamps in UTC and convert only when displaying results.
- Normalize numbers before casting: remove spaces, commas, or currency symbols with REGEXP_REPLACE. Then SAFE_CAST() to NUMERIC or INT64.
- Convert booleans explicitly: SAFE_CAST(LOWER(flag_txt) = 'true' AS BOOL) if inputs vary (Yes, 1, TRUE).
- Don’t cast at every step. Clean once in staging, store typed, and reuse the typed column downstream.
- From JSON, extract first, then cast: SAFE_CAST(JSON_VALUE(payload,'$.amount') AS NUMERIC).
When you need to change a type in production, add a new typed column, backfill it with an UPDATE (or rebuild the table with CREATE OR REPLACE TABLE … AS SELECT) that performs the casts, and switch views to the new field. This will keep dashboards stable while the plumbing improves.
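Here’s a hedged sketch of that pattern; the table sales.payments and its FLOAT64 amount column are assumptions for illustration:
-- Add the typed column, backfill it, then hide the swap behind a view (illustrative names)
ALTER TABLE sales.payments ADD COLUMN amount_numeric NUMERIC;

UPDATE sales.payments
SET amount_numeric = SAFE_CAST(amount AS NUMERIC)
WHERE amount_numeric IS NULL;

CREATE OR REPLACE VIEW sales.payments_v AS
SELECT * EXCEPT (amount, amount_numeric), amount_numeric AS amount
FROM sales.payments;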
Common mistakes with data types and how to fix them
Data-type mistakes rarely look like disasters at first. They show up as quiet oddities — totals that drift or filters that miss rows.
However, in BQ, small typing errors grow fast: at big-data scale, every query touches millions of rows, and inconsistencies multiply with them.
A good mental model is “type once, trust everywhere.” Decide how each column should be stored, convert it cleanly, and keep it that way. Google’s cloud data warehouse will reward you with smaller scans and fewer surprises.
Without further ado, here are some common pitfalls and how to fix them:
- Currency stored as FLOAT64
Floating-point numbers are approximate, so cents can drift when you add thousands of rows. That’s why totals sometimes miss by a few pennies.
How to fix: Change the column to NUMERIC and backfill: SAFE_CAST(amount AS NUMERIC). Re-sum a sample day and compare it to your source system to confirm it matches exactly.
- Local time stored “as is” (mixed time zones)
“9:00” in Kyiv and “9:00” in New York aren’t the same moment. Mixing zones breaks comparisons and durations, especially around daylight saving changes.
How to fix: Convert to UTC on ingest and keep an extra column for the original zone if you need it (e.g., time_zone = 'Europe/Kyiv'); a short sketch appears after this list.
- IDs kept in INT64 when they can start with zeros
Numbers drop leading zeros (postal codes, SKUs, order IDs), changing the value and sort order.
How to fix: Store these as STRING. Add a simple format check (length/pattern) to catch bad values early.
- True/false stored as text (“yes/no”, “Y/N”, “True”)
Text flags are inconsistent and easy to mistype, which makes filters unreliable.
How to fix: Map to BOOLEAN once at ingest, for example: SAFE_CAST(LOWER(flag_txt) IN ('true','1','yes') AS BOOL). Reject anything that isn’t in your allowed list.
- Numbers trapped inside JSON as strings
You can’t safely add or average text values; hidden commas or currency symbols also cause failures.
How to fix: Extract and convert: SAFE_CAST(JSON_VALUE(payload, '$.amount') AS NUMERIC). If you use it often, promote it to a real typed column instead of re-casting in every query.
- Casting in every dashboard instead of once
Copy-pasted conversions drift over time, so each report may apply slightly different rules.
How to fix: Do the cleaning and casting once in a staging step or view, store the typed result, and have dashboards read that clean column.
- Mixing DATE, DATETIME, and TIMESTAMP without a rule
Off-by-hours or off-by-day bugs creep in when time types are used interchangeably.
How to fix: Pick the smallest correct type: use DATE for whole days (no time), DATETIME for local date+time (no zone), and TIMESTAMP for an exact moment in UTC. Write this rule down, so everyone follows it.
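To make the second fix above (local time stored “as is”) concrete, here’s a short sketch; the table and column names are made up:
-- Normalize local DATETIME values to UTC at ingest, keeping the original zone (illustrative names)
SELECT
  TIMESTAMP(local_dt, 'Europe/Kyiv') AS event_ts_utc,  -- interpret the DATETIME as Kyiv local time
  'Europe/Kyiv' AS time_zone                           -- keep the original zone for reference
FROM raw.kyiv_events;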
When you must change a type, add a new field, backfill it with an UPDATE or a table rebuild that performs the casts, and point reports to it through a view. This avoids breaking downstream users while you retire the old version.
Good types are boring, but that’s the goal. Once columns are correct, queries read clearly, bytes scanned drop, and results align. That frees your time for questions that truly matter.
Tips for choosing the right data type for your use case
We’ve covered a lot of material, and you might be having a hard time digesting it all at once. No worries: this chapter brings order to the material and makes it easier for you to pick the right data type in every situation.
The key point is that choosing data types is less about memorizing syntax and more about matching the shape of your data to the questions you ask. Think of this chapter as a practical filter you can run on every column before it goes live.
Here’s a simple, step-by-step list you can follow when picking among BigQuery data types:
- Start with the business meaning, not the feed format. Ask, “What is this value?” If it’s money, it needs exact decimals; if it’s a flag, it’s a boolean. Don’t let a CSV push you into storing everything as text.
- Use exact numbers for money and regulated figures. Choose NUMERIC (or BIGNUMERIC when ranges get huge) so totals won’t drift. Leave FLOAT64 for approximate metrics like sensor readings and ratios.
- Keep time simple: store UTC, convert for display. Use TIMESTAMP for exact moments and DATE for whole days. Convert to local time only in your app or BI layer to avoid daylight-saving and offset mistakes.
- Treat IDs as text when formatting matters. Postal codes, SKUs, and order numbers can have leading zeros or letters — store them as STRING. Add a short pattern check so bad values don’t slip in.
- Use booleans for true/false logic, not “yes/no” strings. BOOLEAN is tiny and fast to filter. Map any free-form inputs to true/false once during ingestion and reject unknown variants.
- Model repetition and grouping with arrays/structs. Arrays capture “many of the same”; structs capture “several related things.” This mirrors the real world and avoids exploding joins later.
- Store semi-structured payloads as JSON, but promote hot fields. Land the whole object in a JSON column, then extract common paths into typed columns for speed. This gives you flexibility without slow, repeated parsing.
- Cast once in staging, not in every report. Clean with TRIM, LOWER, or simple regex, then SAFE_CAST to the target type. Persist the typed result, so dashboards don’t duplicate conversion logic.
With these choices made deliberately, BigQuery can prune scans, keep costs predictable, and return answers that will satisfy you and your fellow teammates.
Pro tip: Keep a lightweight data dictionary next to your schema — type, units, timezone, and allowed ranges for each column. Two sentences per field will suffice.
Quick reference cheat sheet for BigQuery data types
To further ease your work with BQ, arm yourself with the cheat sheets below (three of them, one for each group of data types). They summarize the BigQuery data types you’ll use most, with plain-English “use when” notes, a common pitfall to watch for, and a tiny example (literal or cast) you can paste into SQL.
If you need the “why,” jump back to the earlier chapters; if you just need a fast answer while writing a query in BQ (Google’s cloud data warehouse), start here.
Primitives (numbers & text)
| Type | Use when | Quick notes |
| --- | --- | --- |
| NUMERIC | Currency, exact math, regulated figures | Exact decimal (38 digits, 9 scale). Avoid rounding drift. Example: SAFE_CAST(x AS NUMERIC). |
| BIGNUMERIC | Very large or ultra-precise decimals | Exact decimal (76 digits, 38 scale). Scientific/crypto precision. |
| FLOAT64 | Trends, ratios, sensor metrics | Approximate; not for money. Faster math, tiny error possible. |
| INT64 | Counts, strictly numeric IDs | Whole numbers. If leading zeros/letters matter → use STRING. |
| STRING | Names, free text, IDs with formatting | UTF-8 text. Clean with TRIM, LOWER, REGEXP_REPLACE before casting. |
| BOOL / BOOLEAN | True/false flags | Compact and fast to filter. Map “yes/1/true” at ingest. |
| BYTES | Hashes, binary blobs | Not human-readable; store checksums, encrypted tokens. |
Dates & times
| Type | Use when | Quick notes |
| --- | --- | --- |
| DATE | Whole calendar days (no time/zone) | Example: DATE '2025-01-31'. Great for daily grouping. |
| TIME | Clock time only (no date/zone) | Schedules/shifts. Example: TIME '14:30:00'. |
| DATETIME | Local date + time (no timezone) | Use for local schedules; don’t mix with UTC math. |
| TIMESTAMP | Exact moment in UTC (global reporting) | Store UTC; convert for display. Example: PARSE_TIMESTAMP('%FT%T%Ez', ts_txt). |
Complex & semi-structured (and spatial)
| Type | Use when | Quick notes |
| --- | --- | --- |
| ARRAY<T> | Repeated values in one row (tags, emails) | Query with UNNEST. Keep arrays reasonably sized for interactive queries. |
| STRUCT{...} | Group related attributes into one column | Dot notation to access each field (e.g., addr.city). Keep the context together. |
| JSON | Evolving payloads from APIs/webhooks | Extract with JSON_VALUE/JSON_QUERY, then SAFE_CAST. Promote “hot” paths to typed columns. |
| GEOGRAPHY | Points, lines, polygons (spatial) | Build points with ST_GEOGPOINT(lon, lat). ST_DISTANCE returns meters; ST_CONTAINS for regions. |
Conclusion
Take this guide as a toolkit you can apply to one table at a time. Each improvement — clean casts, clear IDs, right-sized time types — will make your queries easier to read.
Consider sharing a short “type policy” with your team and keep it near the code. You will be amazed at how much time you save each week by reusing typed views instead of fixing the same conversions in reports.
Keep the reference SQL examples, data type checklists, and tables close, and refine them as your needs change. The more you practice this kind of routine data management, the easier it becomes to experiment and adjust. With a practical grip on BigQuery data types, you’ll quickly move from patching data to answering questions such as which channels convert, which regions lag, and which products drive revenue this week.