IncidentalIncidence@feddit.de 1 point 1 year ago

> I can tell you with confidence that DACs can only convert digital sound data into analogue, and that’s due to the audio jack being older than digital audio.

Right. But the principle is the same: hardware that isn't compatible with pre-existing systems gets a control circuit and a digital interface. The digital computer sends instructions to the controller, and the controller carries them out.

> An analogue device isn’t compatible with a digital device, much like how digital sound data (songs, audio tracks in videos, system sounds, etc…) and analogue audio don’t technically work.

Correct. That is why there is dedicated control circuitry designed to let analog and digital systems talk to each other, just as there will be for optical analog computers and every other type of non-conventional computing system.

It's true that conventional systems will not, by default, be able to communicate with analog computers like this one. To control them, you send the question (the instructions) to the control circuitry, which does the calculation on the hardware and returns an answer. That's true for DACs, it's true for FPGAs, it's true for CPUs, it's true for ASICs.

Every temperature sensor, fan controller, camera, microphone, and monitor is also doing some sort of conversion between digital and analog signals. The light emitted by the monitor to your eyes is a physical phenomenon that can be measured as an analog value (by taking a picture of your computer monitor on film, say). How does your monitor produce this analog signal? It has a control circuit that takes digital commands and converts them into light in specific patterns.

> Using an analogue device to accelerate something requires at least some information to be lost on translation, even if the file size is stupidly large.

I don't think you've understood what analog computers are used for (actually, I'm not sure you've understood what analog computing even really is, beyond that it involves analog electrical signals). Analog computers aren't arbitrarily precise the way digital computers are, because they perform the computation with physical values (voltage, current, light color, light intensity) that are subject to interference from physical phenomena such as resistance, attenuation, redshift, and inertia. In other words, you're worried about losing information that doesn't exist in a reliable, repeatable way in the first place.
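To make that concrete, here's a toy model of a noisy analog multiply. The 1% noise figure and the operation are made-up illustrations, not numbers from any real device: run the same "computation" a few times and the trailing digits wobble, so they were never real information to begin with.

```python
import random

def analog_multiply(a, b, noise=0.01):
    """Pretend analog multiplier: the true product plus ~1% physical noise
    (thermal drift, attenuation, sensor error, and so on). Purely illustrative."""
    ideal = a * b
    return ideal + ideal * random.gauss(0, noise)

# Same inputs, five runs -- every answer differs past the noise floor.
for _ in range(5):
    print(f"{analog_multiply(3.14159, 2.71828):.6f}")
```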

A lot of iterative numerical methods need an initial guess and can be iterated to an arbitrary degree of precision. Analog computers are usually used to provide that initial guess and save iteration flops. The resolution just isn't that important when you're only trying to get into the ballpark in the first place.
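Here's a rough sketch of what "saving iteration flops" looks like, using plain Newton's method for a square root (nothing specific to this machine): a ballpark starting value, like one an analog front-end could hand you, cuts the number of expensive digital iterations way down.

```python
def newton_sqrt(target, x0, tol=1e-9):
    """Newton's method for sqrt(target); returns (answer, iteration count)."""
    x, steps = x0, 0
    while abs(x * x - target) > tol:
        x = 0.5 * (x + target / x)
        steps += 1
    return x, steps

# Cold start vs. a rough ballpark guess.
print(newton_sqrt(12345.678, x0=1.0))    # many iterations from a bad guess
print(newton_sqrt(12345.678, x0=111.0))  # just a few from a rough guess
```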

In other words, this computer is designed to solve optimization problems. Say you're getting results based on the color and intensity of the light coming out of it, the way you might read tide values off an electrical voltage on an old desktop analog computer. It's not that important to get exact values for every millisecond at a sampling rate of a bajillion kilohertz; you're looking for an average value, not a falsely precise one.

So if you were designing an expansion card, you would design a controller that can modulate the color and intensity of the light going in and modulate the filter weights in the matrix. Then you could send a digital instruction to "do the calculation with these values of light and these filter values". The controller would read those values, set up the light sources and matrix, turn on the light, read the camera sensors at the back, and tell you what the cameras are seeing. Voila, you're digitally controlling an analog computer.
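A minimal sketch of that control loop, assuming a purely hypothetical interface: every name here (FakeOpticalDevice, OpticalMatrixCard, set_sources, and so on) is invented for illustration, not taken from any real SDK, and the fake device just computes in software the matrix-vector product the light would physically perform.

```python
class FakeOpticalDevice:
    """Stands in for the card's control circuitry; a real one would drive
    light sources, a filter matrix, and a camera sensor."""
    def set_sources(self, intensities):
        self.intensities = intensities
    def set_filters(self, weights):
        self.weights = weights
    def trigger(self):
        # Light passing through the filter matrix acts like a matrix-vector
        # product; here we just compute it numerically.
        self.readout = [
            sum(w * x for w, x in zip(row, self.intensities))
            for row in self.weights
        ]
    def read_sensors(self):
        return self.readout

class OpticalMatrixCard:
    """Host-side wrapper: digital instructions in, digital answer out."""
    def __init__(self, device):
        self.device = device
    def run(self, input_intensities, filter_weights):
        self.device.set_sources(input_intensities)   # program the light sources
        self.device.set_filters(filter_weights)      # program the filter matrix
        self.device.trigger()                        # let the physics happen
        return self.device.read_sensors()            # read the cameras

card = OpticalMatrixCard(FakeOpticalDevice())
print(card.run([0.2, 0.9, 0.5], [[1, 0, 1], [0, 1, 0], [1, 1, 0]]))
```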