How to Choose the Right SQL Numeric Type — INT, BIGINT, DECIMAL & FLOAT Explained [DB Design Guide]

When you design a database table, how do you decide which numeric type to assign to each column?

“Just use INT for integers.” “Better safe than sorry — make it BIGINT.” “It has decimals, so FLOAT.” If these sound familiar, you are not alone. But this kind of default thinking can lead to performance issues, wasted storage, and even critical bugs in financial calculations down the road.

Consider this: swapping INT for BIGINT on a single column in a 100-million-row table adds roughly 400 MB of storage. Factor in two indexes and the gap exceeds 1 GB. On the other end of the spectrum, choosing INT for a high-volume log table that grows by millions of rows per day can cause an overflow crash a few years later, halting your entire application.

And then there is FLOAT. Most developers know that 0.1 + 0.2 evaluates to 0.30000000000000004 instead of 0.3. Yet FLOAT columns still end up in production payment tables, quietly accumulating rounding errors until monthly revenue reports stop adding up.
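The behavior is easy to reproduce outside SQL. A quick Python sketch contrasting binary floating point with exact base-10 arithmetic (the model DECIMAL follows):

```python
from decimal import Decimal

# Binary floating point: neither 0.1 nor 0.2 has an exact base-2
# representation, so the sum lands just off 0.3.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Exact base-10 arithmetic -- no representation error at all.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```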

This article is a comprehensive guide to the four most important SQL numeric types — INT, BIGINT, DECIMAL, and FLOAT — covering their differences, trade-offs, and practical decision rules you can apply immediately.

💡 Tip

Database design is the foundation of any software system. For object-oriented design fundamentals, see the SOLID Principles Guide. For help choosing a programming language, check out the Programming Language Comparison Guide.

SQL Numeric Types at a Glance

Start with the big picture. The table below summarizes every major numeric type. Each section that follows dives deeper.

| Type | Size | Range (approx.) | Precision | Typical Use |
|------|------|-----------------|-----------|-------------|
| TINYINT | 1 byte | 0 – 255 / -128 – 127 | Exact | Flags, status codes |
| SMALLINT | 2 bytes | 0 – 65,535 | Exact | Small counters |
| INT | 4 bytes | ~2.1 billion / ~4.2 billion | Exact | IDs, quantities, counts |
| BIGINT | 8 bytes | ~9.2 quintillion | Exact | Log IDs, large-scale PKs |
| DECIMAL(M,D) | Variable | M digits (D decimal places) | Exact | Money, tax rates, ratios |
| FLOAT | 4 bytes | ±3.4 × 10³⁸ | Approximate | Temperature, sensor data |
| DOUBLE | 8 bytes | ±1.7 × 10³⁰⁸ | Approximate | GPS coordinates, statistics |

The four types that matter most in day-to-day work are INT, BIGINT, DECIMAL, and FLOAT. Master these four and you will handle the vast majority of real-world database designs without issues.

Integer Types (INT Family) — The Default Starting Point

Integers are the fastest, most storage-efficient, and error-free numeric type in SQL. They win on computation speed, index efficiency, and disk footprint. If a column does not need decimal places, an integer type is always the right choice.

MySQL offers five integer sizes:

| Type | Size | SIGNED Range | UNSIGNED Range |
|------|------|--------------|----------------|
| TINYINT | 1 byte | -128 to 127 | 0 to 255 |
| SMALLINT | 2 bytes | -32,768 to 32,767 | 0 to 65,535 |
| MEDIUMINT | 3 bytes | -8,388,608 to 8,388,607 | 0 to 16,777,215 |
| INT | 4 bytes | -2,147,483,648 to 2,147,483,647 | 0 to 4,294,967,295 |
| BIGINT | 8 bytes | -9.2 quintillion to 9.2 quintillion | 0 to ~18.4 quintillion |

The golden rule: if you can represent it as an integer, use an integer type. For example, if your system handles prices in whole cents (USD) or whole pennies (GBP), price_cents INT works perfectly. INT tops out at about 2.1 billion, which means it can handle amounts up to $21 million in cents — more than enough for most e-commerce products.
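The whole-cent pattern keeps money in pure-integer arithmetic. A small Python sketch of the idea (illustrative, not tied to any particular schema):

```python
# Whole-cent prices stay in exact integer arithmetic -- no rounding, ever.
price_cents = 1999                   # $19.99
assert price_cents * 3 == 5997       # exact multiplication

# The ceiling mentioned above: signed INT holds ~2.1 billion cents.
INT_SIGNED_MAX = 2_147_483_647
print(f"${INT_SIGNED_MAX // 100:,}.{INT_SIGNED_MAX % 100:02d}")
# -> $21,474,836.47 per value -- enough for most e-commerce prices
```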

Common integer use cases:

  • IDs (primary keys): user_id, product_id, order_id
  • Quantities: stock_quantity, cart_count
  • Counters: login_count, view_count, retry_count
  • Status codes: order_status (0 = pending, 1 = paid, 2 = shipped …)
  • Boolean flags: is_active, is_deleted (TINYINT with 0/1)

There is no reason to use DECIMAL or FLOAT for any of these. Integers are the fastest and safest choice.

INT vs. BIGINT — Is “Better Safe Than Sorry” Actually Safe?

The single most common dilemma in integer type selection is INT versus BIGINT. The short answer: INT is sufficient for the vast majority of use cases. The long answer reveals why blindly upgrading everything to BIGINT is a costly anti-pattern.

First, let’s get a sense of scale. INT UNSIGNED tops out at about 4.2 billion. The population of the United States is roughly 330 million — barely 8% of INT’s capacity. A web service with 1 million users could store 4,000 log entries per user and still stay within INT range. For user IDs, product IDs, and order IDs, INT is almost always more than enough.

So when does BIGINT become necessary?

  • Access logs: A site handling 100 million page views per month accumulates 1.2 billion rows per year. Within 3–4 years, INT’s ceiling is in sight
  • IoT sensor data: 10,000 devices sending data every second generate roughly 315 billion rows per year — far beyond INT’s range
  • Distributed IDs (Snowflake, etc.): These encode a timestamp, worker ID, and sequence number into a single value that can be extremely large
  • Transaction IDs: A payment system processing 1 million transactions per day hits 3.65 billion in 10 years — dangerously close to INT’s upper bound
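A back-of-the-envelope projection makes the call concrete. This Python helper (the function name and workload figures are illustrative) estimates how long an AUTO_INCREMENT counter survives a given insert rate:

```python
INT_UNSIGNED_MAX = 2**32 - 1      # 4,294,967,295
BIGINT_UNSIGNED_MAX = 2**64 - 1   # ~18.4 quintillion

def years_until_overflow(rows_per_day: float, ceiling: int = INT_UNSIGNED_MAX) -> float:
    """Rough years until an auto-increment counter reaches its ceiling."""
    return ceiling / (rows_per_day * 365)

# 100M page views/month ~= 3.3M rows/day: INT UNSIGNED lasts ~3.5 years
print(round(years_until_overflow(100_000_000 / 30), 1))

# The same workload against BIGINT: billions of years of headroom
print(f"{years_until_overflow(100_000_000 / 30, BIGINT_UNSIGNED_MAX):.1e}")
```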

Now let’s quantify the cost of “just use BIGINT everywhere”:

| Scenario | INT (4 bytes) | BIGINT (8 bytes) | Difference |
|----------|---------------|------------------|------------|
| 100M rows × 1 column | 381 MB | 762 MB | +381 MB |
| 100M rows × 1 col + 2 indexes | 1.14 GB | 2.29 GB | +1.14 GB |
| JOIN memory (estimated) | Baseline | ~1.5–2× | Reduced cache efficiency |

On a 100-million-row table, upgrading a single column plus two indexes from INT to BIGINT wastes over 1.1 GB. Multiply that across several columns and several tables, and the total can reach tens of gigabytes — all of which increases disk I/O, shrinks buffer pool efficiency, and slows down queries.

A practical decision guide:

| Column Purpose | Recommended Type | Reasoning |
|----------------|------------------|-----------|
| User ID | INT UNSIGNED | Very few services exceed 4.2 billion users |
| Product ID | INT UNSIGNED | Same reasoning |
| Order ID | INT UNSIGNED or BIGINT | Depends on volume and lifespan |
| Access Log ID | BIGINT | Billions of rows expected per year |
| Snowflake / Numeric UUID | BIGINT | Values are inherently large |

⚠️ Common Pitfall

Don’t base your choice on current row counts alone. What matters is the growth rate over the full operational lifetime. Estimate annual inserts, project 10 years ahead, and check whether INT UNSIGNED (4.2 billion) will hold. If it won’t, start with BIGINT from day one.

UNSIGNED — Double the Range for Free

In MySQL and MariaDB, adding UNSIGNED to an integer column removes the negative range and doubles the positive range at no extra storage cost. A regular INT spans roughly -2.1 billion to +2.1 billion; INT UNSIGNED spans 0 to 4.2 billion — same 4 bytes.

Columns like IDs, quantities, and counters are never negative. There is no reason not to use UNSIGNED for them.

UNSIGNED usage example
CREATE TABLE users (
  id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  age         TINYINT UNSIGNED,          -- 0–255 is plenty
  login_count INT UNSIGNED DEFAULT 0,
  point       INT UNSIGNED DEFAULT 0
);
⚠️ Common Pitfall

Subtraction between two UNSIGNED columns can underflow. In MySQL, SELECT a - b where a < b either raises an out-of-range error or wraps around to an enormous positive value, depending on version and SQL mode (see NO_UNSIGNED_SUBTRACTION). If subtraction is possible, use CAST(a AS SIGNED) - CAST(b AS SIGNED) or consider keeping the column SIGNED.

Note: PostgreSQL does not support UNSIGNED. The common workaround is a CHECK constraint (CHECK (id >= 0)) to enforce non-negative values at the database level.
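The wraparound itself is ordinary modular arithmetic on a fixed-width unsigned integer. A Python sketch using ctypes to mimic a 32-bit unsigned cell (an analogy to MySQL's behavior, not MySQL itself):

```python
import ctypes

a, b = 3, 5
print(a - b)                          # -2 in ordinary signed arithmetic

# Forced into 32-bit unsigned storage, -2 wraps to 2**32 - 2
print(ctypes.c_uint32(a - b).value)   # 4294967294

# The fix mirrors the SQL advice: do the subtraction in signed space,
# i.e. CAST(a AS SIGNED) - CAST(b AS SIGNED).
```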

DECIMAL (NUMERIC) — The Only Correct Choice for Money

DECIMAL stores numbers as exact base-10 values. Unlike FLOAT, it does not convert to binary internally, so 0.1 is stored as exactly 0.1. For any column where even a fraction of a cent matters — prices, invoices, taxes, account balances — DECIMAL is the only acceptable type.

Why not FLOAT? Let’s see the problem in action:

FLOAT vs DECIMAL precision comparison
-- What happens with FLOAT
SELECT CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT);
-- Result: 0.30000001192092896 (not 0.3)

-- DECIMAL gets it right
SELECT CAST(0.1 AS DECIMAL(10,2)) + CAST(0.2 AS DECIMAL(10,2));
-- Result: 0.30 (exactly 0.30)

How does this tiny error matter in practice? Imagine an online store selling an $11.99 item, processing 50,000 orders per month:

  • With FLOAT: Each row may carry an error as small as +0.000001, but across 50,000 rows and multiple tax calculations, rounding discrepancies accumulate. Multiply by hundreds of SKUs over a year, and the ledger drifts by dollars — enough to trigger an audit investigation
  • With DECIMAL: Zero drift. Every aggregation matches to the cent, every time

“It’s only a millionth of a dollar.” True — but in accounting, if the books don’t balance to the penny, someone has to explain why. “We used the wrong column type” is not an answer any auditor will accept.
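The accumulation is easy to demonstrate in Python: the same mechanism behind the $11.99 example above, contrasted with exact decimal arithmetic (a sketch of the mechanism, not a MySQL benchmark):

```python
from decimal import Decimal

# The classic guaranteed drift: ten additions of 0.1 never reach 1.0.
assert sum(0.1 for _ in range(10)) != 1.0

# At order volume, the same mechanism applies to prices.
ORDERS = 50_000
float_total = sum(11.99 for _ in range(ORDERS))   # drifts unpredictably
exact_total = Decimal("11.99") * ORDERS           # exact base-10

print(float_total)   # near -- but not reliably equal to -- 599500
print(exact_total)   # 599500.00, to the cent, every time
```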

DECIMAL is declared as DECIMAL(M, D) where M is total digits and D is decimal places:

DECIMAL in practice
-- DECIMAL(10,2): 10 total digits, 2 decimal places
-- Max value: 99,999,999.99

CREATE TABLE orders (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  subtotal   DECIMAL(10,2) NOT NULL,  -- Pre-tax amount
  tax_rate   DECIMAL(5,4)  NOT NULL,  -- e.g. 0.0725 (7.25%)
  tax_amount DECIMAL(10,2) NOT NULL,  -- Tax
  total      DECIMAL(10,2) NOT NULL   -- Grand total
);
💡 Tip

DECIMAL and NUMERIC are synonyms in the SQL standard. MySQL, PostgreSQL, and SQL Server all treat them identically. Pick one and stay consistent across your codebase.

Sizing DECIMAL — Why DECIMAL(18,10) Is Almost Always Overkill

Another common mistake is specifying excessive precision: DECIMAL(18,10) or even DECIMAL(30,15) “just in case.” This wastes storage and slows aggregations.

DECIMAL’s storage size is proportional to its digit count. In MySQL, the integer and fractional parts are packed separately: each full group of 9 digits takes 4 bytes, and any leftover digits take 1–4 bytes:

| Type | Storage (MySQL) | Example Use |
|------|-----------------|-------------|
| DECIMAL(5,2) | 3 bytes | Percentages (up to 99.99%) |
| DECIMAL(10,2) | 5 bytes | Prices (up to 99,999,999.99) |
| DECIMAL(12,4) | 6 bytes | Exchange rates (e.g. 1.3456) |
| DECIMAL(18,10) | 9 bytes | Overkill for most applications |

The right approach is to work backward from the maximum value:

  • E-commerce prices: If the highest price is a few million dollars, DECIMAL(10,2) covers up to $99,999,999.99
  • Sales tax rates: In the US, combined state + local rates can reach ~11%. Stored as a fraction (e.g. 0.1100), DECIMAL(5,4) gives four decimal places and a maximum of 9.9999 — far more headroom than any tax rate needs
  • Discount percentages: 0–100% fits in DECIMAL(5,2)
  • Exchange rates: EUR/USD at 1.0845 — DECIMAL(12,4) provides ample room
  • Cryptocurrency: Ethereum’s smallest unit (wei = 10⁻¹⁸ ETH) genuinely requires DECIMAL(36,18) — a rare case where extreme precision is justified

Using DECIMAL(18,10) for a retail price column is like printing a personal note on A0 paper. Right-size the precision, and you’ll save storage and speed up queries.
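MySQL's packing rule (4 bytes per full group of 9 digits on each side of the decimal point, 1–4 bytes for the leftover digits) can be sketched as a small helper — the function name is illustrative:

```python
# Leftover-digit cost per MySQL's DECIMAL packing: 0-8 extra digits
# beyond the full 9-digit groups cost 0-4 additional bytes.
LEFTOVER_BYTES = [0, 1, 1, 2, 2, 3, 3, 4, 4]

def decimal_storage_bytes(m: int, d: int) -> int:
    """Approximate on-disk size of MySQL DECIMAL(M,D)."""
    def side(digits: int) -> int:
        groups, leftover = divmod(digits, 9)
        return groups * 4 + LEFTOVER_BYTES[leftover]
    return side(m - d) + side(d)  # integer part + fractional part

for m, d in [(5, 2), (10, 2), (12, 4), (18, 10)]:
    print(f"DECIMAL({m},{d}): {decimal_storage_bytes(m, d)} bytes")
# Matches the table above: 3, 5, 6, and 9 bytes
```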

FLOAT / DOUBLE — Speed at the Cost of Exactness

FLOAT and DOUBLE are floating-point types that represent decimals using the IEEE 754 binary format. This gives them a major advantage and an inherent limitation.

The advantage is clear: a fixed 4 bytes (FLOAT) or 8 bytes (DOUBLE) can represent an enormous range of values. FLOAT alone covers ±3.4 × 10³⁸, and CPU floating-point hardware makes arithmetic extremely fast.

The limitation is that they store approximations, not exact values. The decimal 0.1 becomes the infinite repeating binary 0.000110011001100… which must be rounded to fit into a finite number of bits. This is the root cause of the infamous “0.1 + 0.2 ≠ 0.3” issue.

FLOAT is the right choice when small rounding errors do not affect the outcome:

  • Temperature readings: A factory sensor reports 23.45 °C with an intrinsic accuracy of ±0.1 °C. A storage error of ±0.0001 °C is meaningless
  • GPS coordinates: Six decimal places in latitude/longitude correspond to ~11 cm precision. DOUBLE preserves up to 15 significant digits — sub-millimeter accuracy that far exceeds any real-world need
  • Machine learning features: ML models have millions to billions of parameters; micro-level rounding in any single weight has negligible impact on overall accuracy
  • Physics simulations: Fluid dynamics and structural analysis rely on FLOAT’s speed, with error controlled at the algorithm level
  • Statistical aggregates: Means, standard deviations, and correlations are computed from data that already contains statistical noise

FLOAT vs. DOUBLE comes down to precision and storage:

| Type | Size | Significant Digits | When to Choose |
|------|------|--------------------|----------------|
| FLOAT | 4 bytes | ~7 | Storage-sensitive, sensor data |
| DOUBLE | 8 bytes | ~15 | High precision needed: GPS, scientific computing |

Storing GPS coordinates as FLOAT gives only ~7 significant digits — roughly 11 m precision. For a mapping application, DOUBLE (~15 digits, sub-millimeter) is the clear choice. Conversely, storing temperature sensor readings as DOUBLE is pointless when the sensor itself is only accurate to ±0.5 °C — FLOAT is more than sufficient.
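You can observe FLOAT's ~7-digit limit without a database by round-tripping a value through IEEE 754 single precision, using Python's struct module:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python double through 4-byte IEEE 754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

print(to_float32(123456.789))  # 123456.7890625 -- exact only to ~7 digits
print(to_float32(23.45))       # off by ~8e-7, irrelevant for a ±0.5 °C sensor
```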

FLOAT vs. DECIMAL — A Quick Decision Guide

In practice, you need to make this call quickly and confidently. Here is a reference table:

| Use Case | DECIMAL | FLOAT / DOUBLE |
|----------|---------|----------------|
| Prices, invoices, billing | ✓ | ✗ (never) |
| Tax rates, discounts | ✓ | ✗ |
| Loyalty points / miles | ✓ (if fractional) | ✗ |
| Inventory weight (kg, lb) | ✓ | ✗ |
| Temperature / humidity | ✗ | ✓ |
| GPS coordinates | ✗ | ✓ (DOUBLE) |
| Sensor data | ✗ | ✓ |
| ML features / scores | ✗ | ✓ |
| Statistical values (mean, etc.) | ✗ | ✓ |

The rule of thumb fits in three lines:

  • Money is involved → DECIMAL
  • Measurement or science → FLOAT / DOUBLE
  • Not sure → DECIMAL (err on the safe side)

Memorize these three rules and you’ll rarely go wrong.

Five Common Design Mistakes

Numeric-type mistakes tend to be invisible during development and only surface in production. Here are the five most frequent ones.

Mistake 1: Making Every Column BIGINT

The “bigger is safer” mindset leads teams to default every integer column to BIGINT. As shown above, this can waste over 1 GB per 100 million rows on a single column plus indexes. Across 10 tables with 3 BIGINT columns each, that’s roughly 12 GB of wasted storage — directly impacting cloud hosting costs and buffer pool efficiency.

Mistake 2: Using FLOAT for Money

This is the most dangerous mistake on the list. It often passes all unit tests because rounding errors are invisible at small scale. The problem emerges in production when transaction volume grows: “Monthly revenue doesn’t match actual bank deposits.” Root-cause analysis takes days, and retroactively correcting FLOAT-stored financial data is extremely difficult.

Mistake 3: Over-Sizing DECIMAL Precision

Defining DECIMAL(30,15) “just in case” wastes storage and slows aggregation queries. Outside of cryptocurrency (where 18 decimal places are genuinely needed), very few business domains require more than 4 decimal places.

Mistake 4: Not Forecasting INT Overflow

A table may start with just a few hundred inserts per day, but growth can be exponential. INT SIGNED tops out at ~2.1 billion. With AUTO_INCREMENT at 50,000 inserts/day, that’s 117 years of headroom — but test data dumps, ID gaps, and unexpected growth can consume that margin faster than expected. Monitor your AUTO_INCREMENT high-water marks regularly.

Mistake 5: Mismatched Types in JOINs

If orders.user_id is INT but users.id is BIGINT, every JOIN triggers an implicit type conversion. In MySQL, this can prevent the optimizer from using indexes, turning a millisecond query into a multi-second full table scan. Always ensure that columns used in JOINs share the exact same type.

Recommended Types by Use Case

Use this quick-reference table when designing new tables:

| Column Purpose | Recommended Type | Notes |
|----------------|------------------|-------|
| User ID | INT UNSIGNED | 4.2 billion is plenty |
| Product ID | INT UNSIGNED | Same reasoning |
| Log / Event ID | BIGINT UNSIGNED | Billions of rows per year |
| Snowflake ID | BIGINT | Inherently large values |
| Product price | DECIMAL(10,2) | Max $99,999,999.99 |
| Tax rate | DECIMAL(5,4) | e.g. 0.0725 (7.25%) |
| Discount rate | DECIMAL(5,2) | e.g. 15.50% |
| Exchange rate | DECIMAL(12,4) | e.g. 1.0845 |
| Stock quantity (integer) | INT UNSIGNED | No decimals needed |
| Weight (kg / lb) | DECIMAL(8,3) | e.g. 12345.678 |
| Temperature | FLOAT | Sensor precision is sufficient |
| GPS lat / lng | DOUBLE | High precision required |
| ML feature | FLOAT | Speed over precision |
| Points (integer) | INT UNSIGNED | No fractions |
| Points (fractional) | DECIMAL(10,2) | Airline miles, etc. |
| Ranking position | INT UNSIGNED | Rankings are never negative |
| Boolean flag (0/1) | TINYINT UNSIGNED | MySQL’s BOOLEAN is TINYINT under the hood |
| Status code | TINYINT or SMALLINT | Size to match the value range |

Sharing a table like this across your team eliminates most type-selection debates during code review.

CREATE TABLE — Real-World Examples

Finally, here are three production-ready table definitions. Pay attention to how every column uses the smallest appropriate type.

Example 1: E-commerce Products

products.sql
CREATE TABLE products (
  id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name        VARCHAR(255) NOT NULL,
  price       DECIMAL(10,2) NOT NULL DEFAULT 0.00,  -- Money = DECIMAL
  tax_rate    DECIMAL(5,4) NOT NULL DEFAULT 0.0700, -- 7% = 0.0700
  stock       INT UNSIGNED NOT NULL DEFAULT 0,       -- Integer quantity
  weight_kg   DECIMAL(8,3),                          -- Shipping weight
  rating      FLOAT,                                 -- Average user rating
  is_active   TINYINT UNSIGNED NOT NULL DEFAULT 1,   -- Boolean flag
  created_at  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Example 2: Access Logs

access_logs.sql
CREATE TABLE access_logs (
  id          BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY, -- High volume
  user_id     INT UNSIGNED,               -- Matches users.id type
  status_code SMALLINT UNSIGNED NOT NULL,  -- HTTP 200, 404, 500...
  response_ms INT UNSIGNED,                -- Response time in ms
  created_at  DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  INDEX idx_user (user_id),
  INDEX idx_created (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Example 3: IoT Sensor Readings

sensor_readings.sql
CREATE TABLE sensor_readings (
  id            BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  device_id     INT UNSIGNED NOT NULL,
  temperature   FLOAT,          -- Sensor precision is enough
  humidity      FLOAT,          -- Same
  latitude      DOUBLE,         -- GPS needs high precision
  longitude     DOUBLE,         -- Same
  battery_pct   TINYINT UNSIGNED, -- Battery 0-100%
  recorded_at   DATETIME(3) NOT NULL, -- Millisecond precision
  INDEX idx_device_time (device_id, recorded_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

All three tables share a common trait: every column is defined with the smallest type that fits its purpose. IDs scale with INT or BIGINT as needed, money uses DECIMAL, measurements use FLOAT/DOUBLE, and flags use TINYINT. This is what a well-designed schema looks like.

Summary — Type Selection Is Performance Engineering

Choosing a SQL numeric type is not just a formatting decision. It affects storage efficiency, index performance, query speed, data accuracy, and future scalability — all at once.

The essentials in four lines:

  • If it fits in an integer, use an integer type (fastest, smallest, zero error)
  • INT for most columns, BIGINT for high-volume logs (storage doubles)
  • DECIMAL for money — no exceptions (FLOAT errors are devastating in finance)
  • FLOAT / DOUBLE for science and measurement (speed and range where approximation is fine)

Above all, the guiding principle is: choose the smallest type that meets your requirements. Oversized types waste storage, reduce cache efficiency, and slow queries. Undersized types risk overflow. Getting it right means estimating the nature of your data (integer vs. decimal), the value range, the tolerance for error, and the growth rate over the system’s lifetime.

Numeric type design is unglamorous work, but getting it right from day one saves you from performance degradation, storage bloat, and production incidents down the road. Every time you write a CREATE TABLE, ask yourself: “Is this type truly the right fit?” That habit alone will elevate the quality of your database design.

FAQ

Q. What should I think about first when I’m unsure which type to use?

A. Start by asking: “Can this value be represented as an integer?” If yes, go with an INT type. Next, ask: “Does this involve money?” If yes, DECIMAL is the only answer. Everything else with decimals — temperatures, coordinates, scores — points to FLOAT or DOUBLE. Follow this sequence and 95% of decisions are instant.

Q. How painful is it to migrate from INT to BIGINT later?

A. MySQL’s ALTER TABLE ... MODIFY COLUMN can change the type, but on large tables it locks the table for minutes or even hours. Tools like pt-online-schema-change or gh-ost perform the migration with near-zero downtime, but they still require careful planning. Getting the type right initially is always cheaper than retrofitting.

Q. PostgreSQL doesn’t support UNSIGNED — what should I do?

A. That’s correct. The standard PostgreSQL approach is to use a CHECK constraint (CHECK (id >= 0)) to enforce non-negative values. Since PostgreSQL’s INT still holds up to ~2.1 billion, it’s sufficient for most workloads. If you genuinely need the 4.2 billion range, switch to BIGINT.

Q. How bad is FLOAT’s rounding error in practice?

A. FLOAT (4 bytes) has about 7 significant digits. Store 123456.789 in FLOAT and you might get back 123456.7890625. For sensor data or scientific measurements — where the instrument itself has limited precision — this is irrelevant. For financial calculations, it means invoices that are off by a cent and monthly totals that don’t reconcile. The rule is simple: never use FLOAT for money.

Q. Is DECIMAL really slower than INT? By how much?

A. In aggregation-heavy queries (SUM, AVG over millions of rows), DECIMAL can be 1.2–2× slower than INT. For typical OLTP workloads — individual SELECTs and INSERTs — the difference is negligible. Some teams store prices as integer cents (e.g. $19.99 → 1999) to gain INT speed, but this breaks the moment you need multi-currency support or sub-cent calculations. Starting with DECIMAL is the safer long-term bet.
