this post was submitted on 28 Jul 2024
220 points (97.4% liked)

Photography


The Nyquist sampling theorem is a cornerstone of analog-to-digital conversion. It posits that to adequately preserve an analog signal when converting to digital, you have to use a sampling frequency twice as fast as what a human can sense. This is part of why 44.1 kHz is considered high-quality audio: even though the mic capturing the audio vibrates faster, sampling it at about 40 thousand times a second produces a signal that, to us, is indistinguishable from one with infinite resolution, since the bandwidth of our hearing peaks, at best, at about 20 kHz.
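A quick sketch of that idea (my own illustration, with made-up numbers, not from the original post): sample a 5 kHz tone at 44.1 kHz and check that an FFT finds the tone exactly where it should be, since 5 kHz sits well below the Nyquist limit of 22.05 kHz.

```python
import numpy as np

fs = 44_100       # CD sampling rate in Hz
f_tone = 5_000    # test tone, well below the Nyquist limit of fs / 2

t = np.arange(fs) / fs              # one second of sample times
x = np.sin(2 * np.pi * f_tone * t)  # the sampled tone

# The FFT peak lands exactly on the tone's frequency: sampling at
# more than twice the signal frequency preserved it faithfully.
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak = freqs[np.abs(np.fft.rfft(x)).argmax()]
print(peak)  # 5000.0
```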

I’m no engineer, just a partially informed enthusiast. However, this picture of the water moving somehow illustrates the Nyquist theorem to me: how the perception of speed varies with distance, and how distance somehow makes things look clearer. The scanner blade samples at about 30 Hz across the horizon.

Scanned left to right, in about 20 seconds. The view from a floating pier across an undramatic patch of the Oslo fjord.

*edit: I swapped the direction of the scan in OP

[–] AnarchoSnowPlow@midwest.social 22 points 3 months ago (4 children)

Little nitpick. Nyquist frequency is at least 2x the maximum frequency of the signal of interest.

The signal of interest could be something like ~20kHz (human hearing or thereabouts) or it could be something like a 650 kHz AM radio signal.

Nyquist will ensure that you preserve artifacts that indicate primary frequency(ies) of interest, but you'll lose nuance for signal analysis.

When we're analyzing a signal more deeply we tend to use something like 40x expected max signal frequency, it'll give you a much better look at the signal of interest.

Either way, neat project.

[–] biber@feddit.org 9 points 3 months ago (1 children)

Double nitpick: according to Wikipedia, your definition is a "minority usage". I teach signal processing and hadn't heard of that one, so thanks for pointing me to it!

Nyquist as half the sampling rate is what I use.

https://en.m.wikipedia.org/wiki/Nyquist_frequency#Other_meanings

[–] AnarchoSnowPlow@midwest.social 3 points 3 months ago

Neat! I've definitely originated misunderstandings based on that. I wonder if it comes from my signals class lol

[–] volodya_ilich@lemm.ee 4 points 3 months ago

Little nitpick to you:

> Nyquist will ensure that you preserve artifacts that indicate primary frequency(ies) of interest, but you'll lose nuance for signal analysis.
>
> When we're analyzing a signal more deeply we tend to use something like 40x expected max signal frequency, it'll give you a much better look at the signal of interest.

This is because your signal of interest, unless it's purely sinusoidal, has higher-frequency features such as harmonics, so if you sample at Nyquist you'd lose all of that. The Nyquist theorem still stands, it's just that you wanna look at higher frequencies than you realize, because you wanna see the higher-frequency components of your signal.
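A tiny arithmetic sketch of that point (my own, with hypothetical numbers): a 1 kHz square wave contains only odd harmonics, and the sampling rate decides how many of them survive below the Nyquist frequency.

```python
f0 = 1_000  # fundamental of a square wave; its spectrum is the odd
            # harmonics f0, 3*f0, 5*f0, ...

def surviving_harmonics(fs):
    # Harmonics that sit strictly below the Nyquist frequency fs / 2
    # and can therefore be represented at sampling rate fs.
    return [n * f0 for n in range(1, 41, 2) if n * f0 < fs / 2]

# Sampling just above twice the fundamental keeps only a bare sine:
print(surviving_harmonics(2_500))   # [1000]
# Sampling at 40x the fundamental keeps the harmonics that give the
# square wave its shape:
print(surviving_harmonics(40_000))  # [1000, 3000, 5000, ..., 19000]
```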

[–] Hugin@lemmy.world 4 points 3 months ago (1 children)

I'll add that frequencies above the Nyquist point fold into lower frequencies. So you need to filter out the higher frequencies with a low pass filter.

This isn't just for audio but applies to any kind of sampling. For example, in video, when you see wagon wheels going backwards, helicopter blades moving slowly, or old CRT displays with a big bright horizontal line sweeping: that's all frequency folding around half the frame rate.
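The folding described above can be written down directly (a sketch with made-up numbers, not from the thread): any frequency above half the sampling rate gets reflected to one inside the observable band.

```python
def folded_frequency(f_signal, fs):
    # Reflect f_signal into the band [0, fs / 2]; this is the frequency
    # an observer sampling at fs actually perceives (direction aside).
    f = f_signal % fs
    return f if f <= fs / 2 else fs - f

# A wheel pattern repeating 23 times per second, filmed at 24 fps,
# appears to crawl at about 1 Hz (and seems to move backwards):
print(folded_frequency(23, 24))  # 1
print(folded_frequency(25, 24))  # 1
print(folded_frequency(10, 24))  # 10: below Nyquist, unchanged
```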

It also applies to grids in images. The pixels in a display act as sample points, and you get frequency folding leading to jagged lines, because a line segment is a half cycle of a square wave with a period of twice its length.

When that segment is diagonal it becomes a two-dimensional signal with even higher frequency components along each axis. This leads to jagged diagonal lines. This is called aliasing, as the higher frequencies have an alias as lower frequencies.

So when you apply antialiasing in video games, it's doing math to smear the signal along the two axes of the display. This makes a cleaner-looking line.
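One common flavor of that smearing is supersampling; here's a toy sketch (my own construction, not a real renderer) that averages a grid of subsamples to turn a hard diagonal edge into a partial coverage value:

```python
def pixel_coverage(px, py, n=4):
    # Fraction of the pixel at (px, py) covered by the half-plane y < x
    # (a diagonal edge), estimated by averaging an n x n grid of
    # subsamples instead of taking a single hard yes/no sample.
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            hits += y < x
    return hits / (n * n)

# A pixel the edge passes through gets an intermediate gray instead of
# a jagged hard step; finer grids approach the true coverage of 0.5:
print(pixel_coverage(0, 0, n=4))    # 0.375
print(pixel_coverage(0, 0, n=100))  # 0.495
```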

[–] Leavingoldhabits@lemmy.world 2 points 3 months ago

I love the small insights into signal theory/processing generated by this image, this is really cool stuff! Thank you for chiming in.

[–] Leavingoldhabits@lemmy.world 3 points 3 months ago

Thanks! Your nitpicking is most welcome.