
Is 4k 4 times the resolution of 1080p?

Released on: 2024-06-25

When the consumer version of 4k reached the masses, marketing leapt into action with all sorts of claims. The most prevalent of these is:

4k is 4 times the resolution of 1080p.

Right from when 4k started being talked about in tech spaces, there would usually be a comment like

That’s not how resolution works.

There’d be some nodding, and everyone would move on.

So imagine my surprise, many years later, to see Reddit be Reddit when this got repeated: someone tried to set the record straight and then got downvoted to hell. In fact, after digging further, the misunderstanding is so widespread that it led me down quite a rabbit hole, fact-checking everything I thought I knew along with the alternative understandings. Was I wrong about this?

This is what I found:

TL;DR

No. 4k is 2 times the resolution of 1080p. It does, however, have 4 times the pixel count.

But the details are so much more interesting than this. Continue reading for more.


What is resolution?

Display resolution on Wikipedia:

is the number of distinct pixels in each dimension that can be displayed.

There are lots and lots of contexts for measuring resolution listed on Wikipedia, and it’s an utterly fascinating read. Given how confident and authoritative the voices claiming that resolution measures area sounded, I was expecting at least one of the contexts to be measured that way. Eg maybe there was a specialised field that does it that way that someone influential had come from. But I didn’t find a single context based on area.

All of the ones I found are based on the number of linearly countable steps. Usually based on the height and width of the display, but sometimes based on a physical unit. We’ll come back to this.

So based on that, let’s only look at a horizontal line for a moment: Out of B, C, and D, which one has twice the resolution of A?

Above: A comparison of 4 1-dimensional resolutions.

Hopefully it was easy to choose C, because 16 is twice the number of distinct steps compared to 8.

The confusion arises when applying this to a multi-dimensional resolution.

A seductive theory

I’ve seen a few explanations that boil down to multiplying the megapixels (the number of pixels on the display), aka the area. All of them work the same way, but only one seems worthy of spending any time on. It goes like this:

Here is a 4x4 grid with a total of 16 pixels:

Above: A 4x4 grid.

The theory goes that 4 times the resolution would be to have a total of 4 copies of the grid, like this:

Above: 1 bigger grid made up from 4 copies of the original grid.

The rationale is that with resolution measuring how much detail is resolvable, having 4 times the area gives you 4 times the detail. This is indisputable, which is what makes this theory so seductive.

But it’s still wrong, because resolution measures the resolvable steps in a linear direction; not the area.
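If it helps to see it concretely, here’s a tiny sketch (purely illustrative) of what actually happens when you tile 4 copies of that 4x4 grid:

```python
original = (4, 4)                        # the 4x4 grid: 16 pixels

# Tiling 2x2 copies of the grid quadruples the pixel count...
tiled = (original[0] * 2, original[1] * 2)
print(tiled)                             # (8, 8)
print(tiled[0] * tiled[1])               # 64 pixels = 4 x 16

# ...but each dimension only doubled, so the resolution went up 2x, not 4x.
print(tiled[0] / original[0], tiled[1] / original[1])   # 2.0 2.0
```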

If you’re still in any doubt, I suggest revisiting “What is resolution?”, above. The references I’ve linked to are fascinating and well worth a read.

Dimensions vs area

Dimensions of a resolution provide all the information you need to easily calculate an area, but they are not an area. Eg:

4cm x 4cm are the dimensions of an area. 16cm² is the area.

So taking that to a display:

1920x1080 is the resolution/dimensions. And 2073600 (2MP) is the area.

Just because it’s easy to calculate the area from the resolution, it does not make it the area. Remember, resolution is the number of distinct pixels in each dimension.

How do you not multiply a resolution?

Anything involving multiplying:

  • The pixel/megapixel count.
  • The area derived by the width and height.

The why is covered in “A seductive theory”, above.

How do you multiply a resolution?

Because resolution measures the number of discrete linear steps, and we rely on the dimensions keeping a consistent relationship with each other before and after any scaling, we need to multiply all of the dimensions by the same multiplier.

Above: Showing a resolution change between different numbers of dimensions.

The relationship between pixel count and resolution can be described like this:

pixel count multiplier = resolution multiplier ^ number of dimensions

Eg: If we multiply the resolution by 4, and the number of dimensions is 2, it would look like this:

pixel count multiplier = 4² = 16

So if our resolution is 1920 x 1080, and we want to multiply it by 4, we get:

new resolution = 1920x1080 * 4 = 7680x4320 = 8k

pixel count = 1920 × 1080 × 4² = 2073600 × 16 = 33177600 = 33 megapixels.

But if we multiply the resolution by 2, and the number of dimensions is 2, then it would look like this:

pixel count multiplier = 2² = 4

So if our resolution is 1920 x 1080, and we want to multiply it by 2, we get:

new resolution = 1920x1080 * 2 = 3840x2160 = 4k

pixel count = 1920 × 1080 × 2² = 2073600 × 4 = 8294400 = 8.3 megapixels.
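If you prefer it as code, here’s a minimal sketch of that relationship (the function names are my own, just for illustration):

```python
def scale_resolution(width, height, multiplier):
    # Multiply every dimension by the same multiplier.
    return width * multiplier, height * multiplier

def pixel_count(width, height):
    return width * height

w, h = 1920, 1080
print(pixel_count(w, h))                          # 2073600 (2MP)

# Multiply the resolution by 2 -> 3840x2160 (4k), pixel count x 2^2 = x4
print(scale_resolution(w, h, 2))                  # (3840, 2160)
print(pixel_count(*scale_resolution(w, h, 2)))    # 8294400 (8.3MP)

# Multiply the resolution by 4 -> 7680x4320 (8k), pixel count x 4^2 = x16
print(pixel_count(*scale_resolution(w, h, 4)))    # 33177600 (33MP)
```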

A rule of thumb

If you multiply the resolution by 2, you get 4 times as many pixels.

Non-square pixels

Normally it’s a reasonable assumption that the pixels have an equal height and width. Ie they are square. But that’s not always the case. The resolution of 1280x1024, when displayed at a 4:3 aspect ratio, should have been 1280x960 for the pixels to be square. Another notable example is the Windows 95 boot logo, which had a resolution of 320x400:

A close-up of the Windows 95 boot logo having a higher vertical resolution than the horizontal resolution.
Above: A close-up of the Windows 95 boot logo having a higher vertical resolution than the horizontal resolution.

When the pixels in a given resolution+aspect ratio are non-square, extra consideration has to be given to how the image behaves as something is rotated. Eg without this consideration, a circle looks like an oval, or a square like a rectangle.

Regardless, when multiplying the resolution of the display, we have to multiply all sides by the same amount, just as we would for square pixels.
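As a rough sketch (my own numbers, not from any spec), here’s how the pixel shape falls out of a resolution and the aspect ratio of the display it gets stretched across:

```python
def pixel_aspect_ratio(width_px, height_px, display_aspect):
    # Width of a pixel divided by its height, given the pixel grid and the
    # physical aspect ratio of the display it's stretched across.
    return display_aspect / (width_px / height_px)

# 1280x1024 stretched across a 4:3 display: pixels are wider than they are tall.
print(pixel_aspect_ratio(1280, 1024, 4 / 3))   # ~1.067

# 1280x960 across the same 4:3 display: square pixels.
print(pixel_aspect_ratio(1280, 960, 4 / 3))    # 1.0

# The Windows 95 boot logo at 320x400 on a 4:3 display: very wide pixels.
print(pixel_aspect_ratio(320, 400, 4 / 3))     # ~1.667
```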

The k actually has a meaning. What does it mean?

How k came to be 1024 in computing instead of 1000

In the metric system, the k denotes 1000.

In the early days of computing, hardware was comparatively basic and resources were really tight, so we had to use a lot of tricks to get anything to happen in a reasonable time frame. One of those was how we count stuff:

Without turning this into a binary lesson, the gist of it is that:

Number of bits   Value of the last bit   Possible range that you can count
1                1                       0-1
2                2                       0-3
3                4                       0-7
4                8                       0-15
5                16                      0-31
6                32                      0-63
7                64                      0-127
8                128                     0-255
9                256                     0-511
10               512                     0-1023
11               1024                    0-2047
12               2048                    0-4095
13               4096                    0-8191
14               8192                    0-16383
15               16384                   0-32767
16               32768                   0-65535

Side note: If you haven’t seen these numbers before, you’re going to start seeing them everywhere now.

In any case, when we are dealing with large numbers in the metric system, and want to use the k value rather than the raw number, we simply drop off the last 3 digits, and slap a k at the end. We might also do some rounding, but let’s leave that alone for now.

If we want to do that in binary, but have the output be human readable, the numbers don’t naturally work like that. We can achieve it with a few calculations, at the cost of some time, which adds up quickly if we have a few of them to do. But if we say that a k is 1024 instead of 1000, we can simply drop the last 10 binary digits and slap a k at the end when displaying the result.
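Here’s a small illustration (my own, with an arbitrary value) of why that shortcut mattered: dropping the last 10 binary digits is a single shift, while dividing by 1000 is real arithmetic.

```python
value = 49152                 # an arbitrary byte count

print(value // 1000)          # 49 -> "49k" if k means 1000
print(value >> 10)            # 48 -> "48k" if k means 1024 (just drop 10 bits)
```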

So in computing

  • 1k is 1024,
  • 2k is 2048,
  • 4k is 4096,
  • 8k is 8192 etc.

Where the 4k term for resolutions came from

To make sure that films and TV series will continue to look good on improving consumer gear for a specific amount of time, the film and TV industries need to predict how quickly consumer gear will improve, and make sure that the quality of their work at least matches where the consumer gear will be at that time. They therefore need to adopt higher resolutions well before consumers do.

On top of that, they often work in an even higher resolution so that they have room to spare when they need to fix a shot that didn’t come out as intended. Eg they want to stabilise a video, or crop in without losing sharpness.

As such, 4k began life in the film and TV world as resolutions with a width of around 4096. Eg:

4K resolution: A general term referring to any digital image containing an X resolution of approximately 4096 pixels.

See Visual effects in a digital world, Page 587.

These days, this has been dumbed down for a more general audience:

4K resolution refers to a horizontal display resolution of approximately 4,000 pixels.

See 4k resolution on Wikipedia.

The consumer 4k resolution of 3840x2160 loosely fits into this.

Note that in each case, the definition specifies the horizontal component of the resolution, but not the vertical component. This is to cater to different aspect ratios. So 3840x480 is just as much 4k as 3840x2160. This is the exact opposite of how 1080p and 720p are measured: they use the vertical component of the resolution so that it stays consistent regardless of the aspect ratio. Both before and after that era, the horizontal component has been used.

Other k resolutions

Given the confusion around what 4x1080p is, it’s not surprising that there is also confusion about other k resolutions. Note that the 1k and 2.5k resolution terms aren’t official, and lead to more confusion than they are worth:

k value   Theoretical width   Correct example   Incorrect example
1k        1024                1024x576          1920x1080, 1280x720
2k        2048                1920x1080         2560x1440
2.5k      2560                2560x1440
4k        4096                3840x2160
5k        5120                5120x2880
8k        8192                7680x4320
16k       16384               15360x8640
32k       32768

Notice how 1920x1080 is 2k, and 2k x 2 = 4k. So 2 times the 1080p resolution is 4k.

You can read more about these and other resolutions on Wikipedia’s Display resolution standards.

Side note: I never realised that 2560x1440 exactly matches its theoretical width. That pleases me.
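The theoretical width column above is simply the k value multiplied by 1024. As a quick sketch:

```python
for k in (1, 2, 2.5, 4, 5, 8, 16, 32):
    print(f"{k}k -> {int(k * 1024)} pixels wide")
# 1k -> 1024, 2k -> 2048, 2.5k -> 2560, 4k -> 4096, 5k -> 5120,
# 8k -> 8192, 16k -> 16384, 32k -> 32768
```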

Resolution vs Density

Density is resolution, but instead of the base unit being the width and height of the display, it’s a physical unit like centimetres or inches. Eg DPI.

Some of the contexts that I came across defined resolution as a density, such as “Printing resolution” using Dots per Inch:

in particular the number of individual dots that can be placed in a line within the span of 1 inch (2.54 cm).

Read more about Pixel density on Wikipedia.
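As a hypothetical example (my own numbers), here’s a density calculation in the same spirit: pixels per inch from a resolution and a diagonal size.

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # Pixel density: length of the diagonal in pixels divided by its physical length.
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1080, 24)))   # ~92 PPI for a 24" 1080p monitor
print(round(ppi(3840, 2160, 24)))   # ~184 PPI: doubling the resolution doubles the density
```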

The grain of truth

While 4k doesn’t have 4 times the resolution of 1080p, it does have 4 times the pixel/megapixel count of 1080p.

So there is terminology to correctly describe this aspect. But resolution is not it.

Why you’d want to work with the area

When you double the resolution to go from 1080p to 4k, you quadruple the amount of data required to represent an image. Or said another way:

4k has 4 times the data requirements of 1080p.

Also, 4k could show 4 times the area of a scene that 1080p can, while maintaining the same quality.

Both of these are valid reasons why you’d want to talk about 4k being 4 times the pixel count of 1080p. If you want to do this, here is a helpful tool that scales based on area.

If you still have any doubt as to why this is not 4 times the resolution, go to “What is resolution?”.

Why the correct usage of resolution is useful

Resolution tells you exactly how much detail you can resolve. Eg

You have two hairs stuck together, that together resolve to 1 pixel width. If you double the resolution, those two hairs will now resolve to 1 pixel each. You won’t see a gap between them yet, but they each have 1 pixel now.

If instead you took that original image, and doubled the pixel count (area), the two hairs would together resolve to 1.4 pixels. It’s still an increase in quality, but it’s not twice the detail. Those two hairs don’t each have their own pixel yet.
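The 1.4 comes from the fact that doubling the area only multiplies each linear dimension by the square root of 2. A quick sketch, with illustrative numbers:

```python
import math

hairs_width_px = 1.0    # the two hairs together resolve to 1 pixel of width

# Doubling the resolution doubles every linear measurement:
print(hairs_width_px * 2)                        # 2.0 -> one pixel per hair

# Doubling the pixel count (area) only scales each dimension by sqrt(2):
print(round(hairs_width_px * math.sqrt(2), 2))   # 1.41 -> not a pixel each yet
```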

Does it matter?

If your only interest is the “Wow! Technology is really progressing!” factor, then it probably doesn’t matter to you.

But in any other context, using the terminology incorrectly while other people use it correctly is sloppy communication that creates mistakes.

And it’s not a matter of being pedantic about marketing ruining terminology. Whether “4 times 1080p” means the data throughput is multiplied by 4 or by 16 literally changes the answer you get. That has significant implications for working out compatibility and getting the correct hardware for a given set of requirements and budget.
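To put some rough numbers on that (uncompressed 8-bit RGB at 60 frames per second, purely as an assumed example):

```python
def raw_gbps(width, height, bits_per_pixel=24, fps=60):
    # Uncompressed bandwidth in gigabits per second.
    return width * height * bits_per_pixel * fps / 1e9

base = raw_gbps(1920, 1080)                      # 1080p: ~2.99 Gbit/s
print(round(raw_gbps(3840, 2160) / base, 1))     # 4.0  -> 2x the resolution (4k)
print(round(raw_gbps(7680, 4320) / base, 1))     # 16.0 -> 4x the resolution (8k)
```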

