this post was submitted on 12 Mar 2026

Technology


The key to working at a place like Ars Technica is solid news judgment. [eds note: tell that to Benj Edwards] I’m talking about the kind of news judgment that knows whether a pet peeve is merely a pet peeve or whether it is, instead, a meaningful example of the Ways that Technology is Changing our World.

The difference between the two is one of degree: A pet peeve may drive me nuts but does not appear to impact anyone else. A Ways that Technology is Changing our World story must be about something that drives a lot of people nuts.

“But where is the threshold?” I hear you asking plaintively. “It’s extremely important that I know when something crosses the line from pet peeve to important, chin-stroking journalism topic!”

Fortunately, the answer is simple. The threshold has been breached when your local public transit agency puts up a sign about the behavior in question.

Which brings me to the sign I saw yesterday in Philadelphia.

“Unless the tea is REALLY hot, keep the call off speaker,” it said.

(For those not in the US, “tea” in this context means gossip or news.)

I fucking hate speakerphone and don't use it, even in my van, unless a photo or document that needs to be addressed gets shared during the conversation.

[–] tal@lemmy.today 2 points 10 hours ago* (last edited 9 hours ago) (1 children)

I care less about speakerphone than I do about Bluetooth headset or regular handset use near me.

The speakerphone makes more noise!

Yes, but people already have conversations between each other in public where we can hear both sides. We train ourselves to tune those out. A speakerphone is analogous to that case of another human talking.

What I find most disruptive about phone conversations near me, versus listening to two other people talking (which I can tune out), is that a phone user's speech pattern is to say something and then pause. The problem is that that is exactly the signal that someone has said something to you and that your attention is required. I have a harder time ignoring those one-sided conversations than tuning out a conversation where I can hear both sides, because it's basically constantly giving my head the "you just missed something and need to respond" signal. It's like when someone says something to you, waits a few seconds, and then your attention gets triggered and you look up and say "What?"

Now, the article does also reference someone turning a speakerphone way up, and that I can get, if you're playing it louder than a human would speak. But that's also kind of a special case.

I think that, in general, the best practice is to text, and most would agree that's uncontroversially the best approach in public. But after that, I'd personally rank speakerphone use above headset or regular phone use.

EDIT: One interesting approach might be phones capable of picking up subvocalization. (I mean, smartphone vendors would always like to have new reasons to sell more hardware, so if they can figure out how to make it work, they might jump on it.)

https://en.wikipedia.org/wiki/Subvocalization

Subvocalization, or silent speech, is the internal speech typically made when reading; it provides the sound of the word as it is read.[1][2] This is a natural process when reading, and it helps the mind to access meanings to comprehend and remember what is read, potentially reducing cognitive load.[3]

This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading.[3]

You'd probably also need some sort of speech synthesizer rig capable of converting that into speech.

A conversation where someone uses headphones/earbuds and a subvocalization-pickup phone would avoid some of texting's limitations (no longer being bound by on-screen keyboard input speed or having to look at the display), provide more privacy for phone users, and not add to the noise affecting other people in the environment.

EDIT2: Other possibilities for the speaker side:

Bone conduction

This has actually been done, but it has some limitations on the sound it can produce, and it requires a device in contact with your head.

https://en.wikipedia.org/wiki/Bone_conduction

Bone conduction is the conduction of sound to the inner ear primarily through the bones of the skull, allowing the hearer to perceive audio content even if the ear canal is blocked. Bone conduction transmission occurs constantly as sound waves vibrate bone, specifically the bones in the skull, although it is hard for the average individual to distinguish sound being conveyed through the bone as opposed to the sound being conveyed through the air via the ear canal. Intentional transmission of sound through bone can be used with individuals with normal hearing—as with bone-conduction headphones—or as a treatment option for certain types of hearing impairment. Bones are generally more effective at transmitting lower-frequency sounds compared to higher-frequency sounds.

The Google Glass device employs bone conduction technology for the relay of information to the user through a transducer that sits beside the user's ear. The use of bone conduction means that any vocal content that is received by the Glass user is nearly inaudible to outsiders.[47]

Phased-array speakers to produce directional sound

Here, the device needs to track its position and orientation relative to a given user's ears, then drive a phased array of speakers, each playing the sound at just the right phase offset to produce constructive interference in the direction of the user's ears; it's beamforming with sound. Other users will have a hard time hearing it, because destructive interference in their direction leaves the sound garbled and quieter.

https://en.wikipedia.org/wiki/Beamforming

Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception.[1] This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.

Beamforming is more often used for reception than for transmission, with microphone arrays, but it works for transmission too. You need a minimum number of speakers in the array to be able to steer beams of sound with constructive interference toward a given number of listeners.
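As a rough sketch of the delay math involved: a simple delay-and-sum transmit beamformer just delays each speaker's signal so that every wavefront arrives at the listener's position at the same instant, producing constructive interference there. The function below is a minimal illustration (the array geometry, target point, and 343 m/s speed of sound are assumptions for the example, not anything from a real product):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def steering_delays(speaker_positions, target):
    """Delay-and-sum steering: compute a per-speaker delay (seconds)
    so sound emitted by every speaker arrives at `target` at the same
    instant, giving constructive interference at that point.

    speaker_positions: list of (x, y) coordinates in metres
    target: (x, y) position of the listener's ear in metres
    """
    distances = [math.dist(p, target) for p in speaker_positions]
    farthest = max(distances)
    # The farthest speaker fires immediately (delay 0); nearer speakers
    # wait just long enough for all wavefronts to coincide at the target.
    return [(farthest - d) / SPEED_OF_SOUND for d in distances]

# Hypothetical 4-element linear array along the x-axis, 5 cm spacing,
# steered at a listener half a metre in front of the array's midpoint.
array = [(i * 0.05, 0.0) for i in range(4)]
delays = steering_delays(array, (0.075, 0.5))
```

Off-axis listeners receive the same delayed signals, but the path lengths from each speaker no longer cancel the delays, so the copies arrive out of phase and partially cancel; that is the "garbled and quieter" effect described above.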

[–] Powderhorn@beehaw.org 4 points 9 hours ago

I can see where you're coming from on that; at least with speakerphone, you know no one is addressing you. When I was at my ex's last week, she said something from the bedroom, prompting me to loudly say "What?" Her son had just called, and if he'd heard my voice, we'd not have parted ways on good terms.