Ebay Classic organs

Computer Technology in Digital Organs

This topic is closed.

  • Computer Technology in Digital Organs

    I have lately been curious about the hardware used in various electronic organs. I recently did some casual research on the sound processors that were available in digital keyboards and computer sound cards in the 1980s and 1990s. One thing I've never found much information on is the computer hardware used in various digital organs, and that's something I would be very interested in knowing. Everyone says their technology is better, but everyone seems to be tight-lipped about the processors, memory, busses, storage, and other characteristics of the computer hardware in their products. If this information were more readily available, it would be easier for someone who knows what he is looking at (like me) to objectively determine which company uses better technology.

    I think I remember reading in the past that Ahlborn-Galanti's Drake Series organ is software-based. They also say that their construction is modular and can be upgraded. They say that they use a custom-designed 32-bit RISC CPU.

    Rodgers talks about a Roland "SSC" processor that integrates many system components onto a single processor. It doesn't look that powerful, compared to a PC processor. It looks like a 32-bit RISC chip. To be fair, I don't think you need a lot of power to play back samples, just a reasonably-fast bus and sufficient RAM.

    Predictably, there is very little information about the actual capabilities of these chips, or the amount of memory or storage there is available.

    Also, predictably, there is much less information from Allen about their technology. They talk about how they manufacture PCBs in-house, which sounds nice, and their console construction is pretty nice, but they don't name any of the chips or technologies they use. There is a video by Allen about their "DOVE" software, but nothing about hardware. I remember a few years ago, they were talking about convolution and asserted that their technology is much more sophisticated than anyone else's. I think I even remember someone implying that no ordinary computer could do what their machines do, which I know is a lie. They don't say a word about the processors, memory, expandability, or any of that nerdy stuff.

    The electronic organ companies are banking on selling their products to church committees, who unfortunately are most often ignorant when it comes to organs, music, and electronics.

    On the subject of Allen and 1990s tech, one of the things I read about is the SPC-700 processor. That is the sound processor the Super Nintendo uses. It played samples, which the Allen MDS didn't do; the MDS was still using a Fourier Transform method to generate sound-wave models. I began to write an essay about the SPC-700, with the goal of comparing it to the hardware of the Allen MDS technology, but as I mentioned before, no one is very public about the processors they use. I can only compare them by comparing their behaviour: one modeled sound waves, and the other played back samples. If the SNES had more SRAM, I think it could have worked as a digital organ. When the SNES and the Allen MDS came out, 1MB was a lot of memory. There is a video on YouTube of a near-CD-quality clip of classical music being played on an SNES by a hobbyist.

    From what little information I was able to find on the hardware in digital organs, I don't think they are more powerful than PC hardware that was around at the time of their release. I even think they are in many cases (both now and 25 years ago) considerably less sophisticated than desktop PC hardware. I also don't think significant changes have been made to electronic organ technology in at least the last 10 years.

    Does anyone here know about the hardware in the competing digital organs, current-gen or in previous generations? How far off-base are my ideas?

    Thanks.

  • #2
    Two specifications you will hear discussed for digital organs are sample size and polyphony. Producing a certain number of simultaneous notes from different samples, each of a particular bit depth, scan rate, and duration, requires a processor of a certain size. The organ companies will discuss these music-related specs somewhat, but not the processor or memory size required to accomplish all that. Samples that include the reverb or ambient effect are much longer in duration than samples that just include the wave shape.
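Those interrelated specs can be turned into rough numbers. Here is a back-of-envelope sketch (all figures are hypothetical illustrations, not any manufacturer's actual specs) of how polyphony, bit depth, and sample length translate into memory and bus bandwidth:

```python
# Back-of-envelope estimate of the memory and data rate implied by the
# "sample size and polyphony" specs discussed above. All figures here are
# illustrative assumptions, not any manufacturer's actual numbers.

def sample_memory_bytes(duration_s, rate_hz, bits, num_samples):
    """Storage for a bank of mono samples of equal length."""
    return int(duration_s * rate_hz * (bits / 8) * num_samples)

def playback_rate_bytes_per_s(polyphony, rate_hz, bits):
    """Sustained read rate the playback engine must achieve."""
    return int(polyphony * rate_hz * (bits / 8))

# Hypothetical organ: 64 voices of polyphony, 16-bit samples at 44.1 kHz,
# 200 stored samples of 2 s each (dry, no reverb tail).
mem = sample_memory_bytes(2.0, 44_100, 16, 200)
rate = playback_rate_bytes_per_s(64, 44_100, 16)
print(f"sample ROM: {mem / 1e6:.1f} MB")     # ~35 MB
print(f"read rate:  {rate / 1e6:.2f} MB/s")  # ~5.6 MB/s
```

Even with generous assumptions, the sustained read rate lands well within what a modest embedded bus can deliver, which supports the earlier point that raw processor power is not the bottleneck for sample playback.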
    You will find threads from a certain poster about the organs he sells, which model sound from a Fourier point of view instead of playing back a straight waveform. The buzzword he uses is "physical modeling". Mathematically, a waveform can be described either in the time domain, as amplitude at successive instants over one or many cycles, or as the sum of an infinite number of sine waves of different frequencies with different weighting coefficients. Sine waves are the basis functions of Fourier analysis; other function families (such as complex exponentials) are used in infinite-series descriptions of physical phenomena other than sound.
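The two descriptions in the paragraph above (time-domain amplitude versus a weighted sum of sine waves) are equivalent, which a few lines of NumPy can demonstrate. The harmonic weights here are arbitrary illustration values:

```python
import numpy as np

# A waveform can be described in the time domain (amplitude vs. time) or as
# a weighted sum of sine waves (its Fourier series). This sketch builds one
# cycle additively and recovers the harmonic weights with an FFT.

N = 1024                           # points per cycle
t = np.arange(N) / N               # one fundamental period
weights = [1.0, 0.5, 0.33, 0.25]   # amplitudes of harmonics 1..4 (arbitrary)

# Time-domain view: sum of sines (additive synthesis)
wave = sum(a * np.sin(2 * np.pi * (k + 1) * t) for k, a in enumerate(weights))

# Frequency-domain view: the FFT recovers the same weights
spectrum = np.abs(np.fft.rfft(wave)) * 2 / N
print(np.round(spectrum[1:5], 2))  # recovers 1.0, 0.5, 0.33, 0.25
```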
    As far as hardware goes, my 1985(?) Ensoniq EPS synthesizer was reputed to be built on the chipset of the Apple Macintosh. That was its downfall, as the drive it used for storage was Apple-compatible and has been out of production for 30 years. IC analogs can be bought, but not with the operating system installed. The sound of that unit was a bit on the gritty side, as were the early Allen MOS organs IMHO, only more so in 1975.
    I find used computer hardware, however powerful, to be fairly useless without the development system and copyrighted software originally used to program it. I have some Lowrey printed circuit boards I salvaged, with thousands of mega-somethings of memory and processing, which I actually strip for the heatsinks, power transistors, and diodes. Even with PCs, which are their own development system, matching up the development software to the box you've got is impossible; I've got a Visual Basic that won't put one word on the screen on any PC I own. And if you don't get the load disk, the operating system the PC came with (Windows) will soon be made trash for you by Microsoft with their "updates". This was the problem with the original 8080, which took a $6600 development system to create software for. I have some salvaged 8051 boards that took the same $6600 box plus some other $2000 development accessories to develop software for. The 8051 instruction set lives on, I read, in the Arduino, which sells because it comes with its own native software development system.
    Good luck on your endeavors.
    City: Hammond H-182 organ (2 ea), A100, 10-82 TC, Wurlitzer 4500, Schober Recital Organ, Steinway 40" console, Sohmer 39" pianos, Ensoniq EPS; country: Hammond H112



    • #3
      There are really two sub-categories of digital organs. The first are those that run a fixed tone-generation architecture on specialized chipsets, simply duplicated in parallel as needed to generate the necessary number of stops and divisions. In these, the tone-generation path usually includes a processor running the TG algorithm, EPROM memory, DAC chips, and analog buffer stages on the same board, with a digital clock source on the board or on a main CPU board central to the system. For larger organs, there are simply more boards, and mix-down and signal distribution are usually done in the analog realm on a separate board.

      The second type are those that run a software-based tone-generation algorithm on more generic DSPs. These may also be run in multiples for larger instruments, but the architecture is much more flexible and, in theory, can be updated. Signal processing and mixing are largely done in the digital domain, although things like reverb may be handled by separate dedicated processors (as on the Gen I Allen Renaissance organs).

      The Allen MOS, ADC, and MDS organs are all examples of the first. As were the Galanti M-114-based Praeludiums and VLSI-based Chronicler organs.

      The first large-scale implementation of the second type that I am aware of was the Rodgers PDI organs in 1991, which used fairly generic DSPs running a custom program. At the time there were some limitations, which have long since been overcome. The Ahlborn Galanti DRAKE and Allen Renaissance/Quantum organs are also examples of this. Pretty much everyone does things this way now.

      Most everyone is doing some version of sampling these days, with the notable exception of some of the Viscount organs. It's simply the easiest way to do it, and with the abundance of processing power and cheap memory available, a lot of variables can be accounted for just by ramping up the sample memory and processing dedicated to them. It's less elegant than physical modeling or similar technologies, but requires far less engineering investment, and the limitations of it can just be overcome by brute force. Hauptwerk is the ultimate example of this.

      Allen's "convolution" technology was applied to their Virtual Acoustics reverb. This, again, has become commonplace as memory and processing have become more capable and cheaper.



      • #4
        Is there a significant difference between Fourier and physical modeling? There are some electronic organ companies that rely heavily on what they call "physical modeling", and I don't like their sound. It is a little better than Allen's, but it still sounds artificial. I never cared for Allen's Fourier Transform wave models either. I thought it was a little dishonest of them to bludgeon people over the head for decades with carefully worded adverts about how they sample real pipes and that's what you hear, when they didn't actually have a sampled organ until 1999. I'd love to see that become a big, heated discussion, but I'll save it for another time.

        On to the PC question. There are software-based virtual organs you can get that run on a PC or Mac. They require more effort to use than a digital organ, but they are vastly superior in sound. I used Hauptwerk on a Pentium 4 HT with 2GB of RAM and a Sound Blaster Live! 24-bit sound card. When I did this, Core 2 Duo processors were the latest, so my Pentium 4 was already dated, but it worked very well! I used the Sound Blaster to create reverb and it was very realistic. Even without the reverb, it was significantly better than an Allen Quantum, which I had recently played. This makes me think it would be possible to write custom software that is designed to run on PC hardware. This would enable you to have much more powerful hardware without having to develop it.



        • #5
          Originally posted by cantornikolaos View Post
          I have lately been curious about the hardware used in various electronic organs.

          Also, predictably, there is much less information from Allen about their technology. They talk about how they manufacture PCBs in-house, which sounds nice, and their console construction is pretty nice, but they don't name any of the chips or technologies they use. There is a video by Allen about their "DOVE" software, but nothing about hardware. I remember a few years ago, they were talking about convolution and asserted that their technology is much more sophisticated than anyone else's. I think I even remember someone implying that no ordinary computer could do what their machines do, which I know is a lie. They don't say a word about the processors, memory, expandability, or any of that nerdy stuff.



          On the subject of Allen and 1990's tech, one of the things I read about is the SPC-700 processor. That is the sound processor the Super Nintendo uses. It played samples, which Allen MDS didn't do. They were still using a Fourier Transform method to generate sound wave models.
          I'm pretty sure you're wrong on two counts. First, when Allen introduced convolution reverb, it was unique in that it did the calculations for convolution in real time. I think they bought the exclusive rights to use that technology. Perhaps now there are other real-time convolution reverbs out there.

          Second, I'm pretty sure the MDS organs (at least the later ones) did use samples, albeit short ones. Maybe some Allen techs can set the record straight. Otherwise, what was the difference between ADC and MDS?

          Even Allen's earlier digitals started with recordings of real pipes that were analyzed and resynthesized.
          Last edited by radagast; 04-09-2016, 11:33 AM.



          • #6
            I think the later MDS used at least partial sampling. Allen advertised "52.5 kHz sampling rate with x-megabits of memory."
            Allen MOS 1105 (1982)
            Allen ADC 5000 (1985) w/ MDS Expander II (drawer unit)
            Henry Reinich Pipe 2m/29ranks (1908)



            • #7
              Originally posted by cantornikolaos View Post
              Is there a significant difference between Fourier and physical modeling?
              There's Fourier analysis. It's a way of deconstructing a waveform into its harmonic components. It's not a model.

              Physical modeling is a methodology through which the characteristics of the sound are determined by the values of physical parameters fed into the model. The output of the physical model is fed to a synthesis system to produce the sound. This synthesis can be accomplished in any number and combination of ways.

              Sound synthesis can be categorized as either in the frequency domain (additive and subtractive synthesis) or the time domain (digital synthesis). Hammond organs and the Bradford/Musicom system are examples of additive synthesis systems, while most of the analog organs of the last century used subtractive synthesis.

              All digital Allen, Rodgers, Ahlborn, Johannus, Walter, M & O, etc. organs use digital synthesis. All these organs are sample-based. The first-generation Allens used samples; it's just that these samples were derived using Fourier analysis of a longer sample, in order to distill the sound to the half-waveform needed given the memory and processing power available at the time. When we think of samples today, we think of much longer data samples that include the entire sound envelope, but the principles involved in reproducing them are no different from those employed in those early Allens.
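That distillation step can be sketched in a few lines: Fourier-analyze a longer steady-state sample, keep only the low harmonics, and resynthesize a tiny single-cycle wavetable for looped playback. Everything here (the synthetic "recording", the harmonic count, the 64-point table size) is an illustrative assumption, not Allen's actual procedure:

```python
import numpy as np

# Distill a longer steady-state sample into a compact single-cycle wavetable
# by keeping only its strongest harmonics. All parameters are illustrative.

N = 2048
t = np.arange(N) / N
rng = np.random.default_rng(0)
# Stand-in for a steady-state slice of a recorded pipe: 3 partials plus noise
recording = (np.sin(2 * np.pi * 1 * t) + 0.4 * np.sin(2 * np.pi * 2 * t)
             + 0.2 * np.sin(2 * np.pi * 3 * t)
             + 0.01 * rng.standard_normal(N))

# Fourier analysis: measure the low harmonics, discard everything else
spectrum = np.fft.rfft(recording)
table_spec = np.zeros(33, dtype=complex)   # half-spectrum of a 64-point table
keep = 8                                   # retain the first 8 harmonics
table_spec[1:keep + 1] = spectrum[1:keep + 1] * (64 / N)

wavetable = np.fft.irfft(table_spec)       # one 64-sample cycle, ready to loop
print(len(wavetable), "samples per cycle")
```

A 64-sample, 16-bit cycle is only 128 bytes, which shows how a sound could be stored in the tiny memories of the era at the cost of losing the attack and envelope.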

              Fourier analysis can be used to determine the frequency components for additive or digital synthesis, but there is no such thing as Fourier modeling.

              Originally posted by cantornikolaos View Post
              Pentium 4 was already dated, but it worked very well! I used the Sound Blaster to create reverb and it was very realistic. Even without the reverb, it was significantly better than an Allen Quantum, which I had recently played. This makes me think it would be possible to write custom software that is designed to run on PC hardware. This would enable you to have much more powerful hardware without having to develop it.
              Well, as you noted, there's Hauptwerk, jOrgan, etc. that already do that. There is little incentive for commercial builders to do the same, because they are already invested in proprietary hardware, it is that hardware that distinguishes them from competitors, and there are advantages to using dedicated hardware over off-the-shelf PC components.
              -Admin

              Allen 965
              Zuma Group Midi Keyboard Encoder
              Zuma Group DM Midi Stop Controller
              Hauptwerk 4.2



              • #8
                Originally posted by radagast View Post
                I'm pretty sure you're wrong on two counts. First when Allen introduced convolution reverb, it was unique in that it did the calculations for convolution in real time. I think they bought the exclusive rights to use that technology. Perhaps now there are other real time convolution reverbs out there.
                Convolution reverb by its very nature is always a real-time process, in which the impulse response of the acoustical space is convolved with the audio signal as it is input. This is why it's processing-intensive. Regular non-convolved reverb, including Allen's first-gen Virtual Acoustics (which is still quite good), is not; perhaps that's the difference you meant?

                Commercially available convolution reverbs (such as Altiverb) pre-dated Allen's use of the technology, and Allen is by no means an exclusive user of it. They MAY be the only organ maker using it, and one of the few hardware applications of it; most of the usual convolution-reverb suspects are software plug-ins due to their processing-hungry nature.

                - - - Updated - - -

                Originally posted by organman95 View Post
                I think the later MDS used at least partial sampling. Allen advertised "52.5 kHz sampling rate with x-megabits of memory."
                When I was with Galanti, we always loved this twist of phrase, given that no one was really using megabits as a memory standard (8 megabits = 1 megaBYTE), except to make things sound more impressive than they were. The total number of Allen "megabits" in a typical instrument, I seem to remember, seemed awfully slim for a linearly sampled organ of the traditional variety.

                "52.5 kHz" is a really oddball "sampling rate" that literally no one else uses. I always interpreted that to be their clock frequency, given my evident belief that they weren't doing real-time traditional sampling until the Renaissance organs. Curiously, all that talk of "sampling rate" and "megabits" completely disappeared when the Renaissance organs, which DID use real-time sampling, came out, so I tend to believe that there were a lot of semantics at play in their marketing.

                All that said, the W5 MDS organs could be quite nice, and the MDS technology was quite up to the task of making very fine-sounding organs, especially in the larger instruments, assuming (as with all organs) a good installation, finishing, and speaker setup.



                • #9
                  Originally posted by michaelhoddy View Post
                  Convolution reverb by its very nature is always a real-time process- in which the impulse response of the acoustical space is convolved with the audio signal as it is input. This is why it's processing-intensive. Regular non-convolved reverb, including Allen's first-gen Virtual Acoustics, which are still quite good, are not.

                  Commercially-available convolution reverbs (such as AltiVerb) pre-dated Allen's use of the technology, and Allen is by no means an exclusive user of that technology. They MAY be the only organ maker using it, and one of the few hardware applications of it- most of the usual convolution reverb suspects are software plug-ins due to their processing-hungry nature.
                  The only convolution reverbs I know of before Allen started using it were NOT real-time. An impulse or "analysis" of the room would take place, and then, given a sample, the computer would do the number crunching and produce a result that was supposed to sound like that sound played in a real physical location. Early examples were not real time. In other words, it went like this:

                  1) An impulse sound would be produced in an acoustic space.
                  2) Microphones recorded the sound waves coming back to the microphone.
                  3) A computer would digitize and analyze the reflected sound waves producing a "model" of that acoustic space.
                  4) A digitized recording would be run through a convolution reverb program that uses that model.
                   5) Sometime later, maybe a few minutes, the convolution reverb program would calculate what that recording would sound like in that acoustic space.
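The pipeline in those five steps reduces, at its core, to one operation: convolving the dry signal with the room's impulse response. A minimal offline sketch, with a synthetic decaying-noise burst standing in for a measured IR:

```python
import numpy as np

# Offline convolution reverb in miniature: convolve a dry signal with a
# (synthetic) room impulse response. Real IRs are recorded with microphones;
# this exponentially decaying noise burst just stands in for one.

rate = 8000                                 # Hz, kept small for speed
rng = np.random.default_rng(1)

# Steps 1-3 stand-in: a fake 0.5 s impulse response of the "room"
n_ir = rate // 2
ir = rng.standard_normal(n_ir) * np.exp(-np.linspace(0, 6, n_ir))

# Steps 4-5: run a dry recording (a short 440 Hz beep) through the convolution
t = np.arange(rate // 4) / rate
dry = np.sin(2 * np.pi * 440 * t)
wet = np.convolve(dry, ir)                  # the whole reverb algorithm

print(len(dry), len(wet))   # wet is longer: it carries the reverb tail
```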



                  • #10
                    Originally posted by indianajo View Post
                    You will find threads from a certain poster about the organs he sells, that model sound from a fourier point of view instead of a straight waveform analog. The buzzword he uses is "physical modeling". Mathematically waveforms can be described either in the time domain with the amplitude described in different times over one or many cycles, , or as the sum of an infinite number of frequencies of sine waves with different weighting coefficients. Sine waveforms are used in fourrier analysis, other waveforms like the euler are used in more common infinite series descriptions of other physical phemonenon than sound.
                    What you are describing is NOT physical modeling, but additive synthesis or resynthesis. Physical modeling uses mathematical calculations to describe the physical properties of a physical object, in this case an organ pipe and the associated wind supply. In real time these calculations produce, theoretically, the same result as a physical pipe with air blown through it. Viscount uses physical modeling in their PHYSIS organs and digital pianos. Additive synthesis organs were, I think, based on the MUSICOM system in England. Copeman-Hart is one organ brand. Veritas is another company that uses the MUSICOM system. Graham Blythe has been involved with this technology for years, I think. Veritas uses the term "modeling", but it's not physical modeling.



                    • #11
                      Originally posted by radagast View Post
                      The only convolution reverbs I know of before Allen started using it were NOT real-time. An impulse or "analysis" of the room would take place and then with a sample, the computer would do number crunching and produce the result which was supposed to sound like a particular sound played in a real physical location. Early examples were not real time. In other words it went like this:

                      1) An impulse sound would be produced in an acoustic space.
                      2) Microphones recorded the sound waves coming back to the microphone.
                      3) A computer would digitize and analyze the reflected sound waves producing a "model" of that acoustic space.
                      4) A digitized recording would be run through a convolution reverb program that uses that model.
                      5) Sometime later, maybe a few minutes, the convolution reverb program would calculate the results of how that recorded sound would sound like in that acoustic space.
                      That's exactly how it works, except for step 5. If there weren't enough computer throughput to output a processed signal in near real time, the continual signal input would eventually overrun the (especially then-limited) memory buffers. This is why real-time processing power has always been the limiting factor in making this a viable commercial product with workable processing latency and throughput.

                      In any case, Allen wasn't first to market with it. There were several really expensive hardware boxes (Dolby, Lake Audio, etc.) in the '90s, and then Altiverb in 2001, as well as a few lesser players, all before Allen.
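The buffer problem described above is exactly what block-based (overlap-add) convolution solves: process fixed-size input blocks as they arrive, carrying only a short tail between blocks, so memory stays bounded no matter how long the input runs. A sketch with illustrative block and IR sizes (real products partition long IRs further to cut latency):

```python
import numpy as np

# Overlap-add streaming convolution: memory use is O(len(ir)), not
# O(total input), which is why real-time convolution is feasible at all.

def overlap_add_convolve(blocks, ir, block_len):
    """Stream input blocks of length block_len through FIR `ir`,
    yielding output blocks of the same length."""
    tail = np.zeros(len(ir) - 1)
    for x in blocks:
        y = np.convolve(x, ir)      # block_len + len(ir) - 1 samples
        y[:len(tail)] += tail       # add the tail carried from the last block
        tail = y[block_len:].copy() # carry the new tail forward
        yield y[:block_len]

rng = np.random.default_rng(2)
ir = rng.standard_normal(64)        # toy impulse response
signal = rng.standard_normal(1024)  # toy input stream
blocks = np.split(signal, 1024 // 256)

streamed = np.concatenate(list(overlap_add_convolve(blocks, ir, 256)))
reference = np.convolve(signal, ir)[:1024]   # one-shot convolution
print(np.allclose(streamed, reference))      # → True
```

The streamed result matches the one-shot convolution exactly; the only cost is a latency of one block plus the processing time per block.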



                      • #12
                        Originally posted by michaelhoddy View Post
                        That's exactly how it works, except for step 5. If there wasn't enough computer throughput to output a processed signal in near real-time, the continual signal input would eventually crash the (especially then limited) memory buffers. This is why real-time processing power has always been the limiting factor in this being a viable commercial product with workable processing latency and throughput.

                        In any case, Allen wasn't first to market with it. There were several really expensive hardware boxes (Dolby, Lake Audio, etc) in the 90's, and then Altiverb in 2001, as well as a few lesser players, all before Allen.
                        If I remember correctly, Allen licensed their technology from Lake Audio. Their systems are real time. But there are (or were) convolution reverb programs that will crunch the numbers in non-real time to produce the results of what a particular sound would sound like in a particular space. Allen's marketing was that their technology, licensed from Lake Audio, would do it in real time.



                        • #13
                          Hi,

                          Just to say, bits and bytes alone are not a good way to determine final results in terms of sound. There are proprietary means of procuring samples, proprietary means of processing, audio channelling, speaker designs, etc., along with setup, voicing, voicing software, etc., that all contribute to the final sound.

                          That is why new digital organs don't sound a whole lot better than they did 15 or 20 years ago, even though on paper they should sound so much better.

                          AV
                          Last edited by arie v; 04-09-2016, 11:47 AM.



                          • #14
                            Originally posted by radagast View Post
                            If I remember correctly, Allen licensed their technology from Lake Audio. Their systems are real time. But there are (or were) convolution reverb programs that will crunch the numbers in non-real time to produce the results of what a particular sound would sound like in a particular space. Allen's marketing was that their technology, licensed from Lake Audio, would do it in real time.
                            What year did Allen start using this? I always was of the impression that it was in the second generation of Renaissance organs.

                            There were and are definitely computer programs that would convolve short samples of audio with a reverb impulse response before the Dolby and Lake products, but those were more laboratory novelties than useful audio processing tools.



                            • #15
                              Originally posted by michaelhoddy View Post
                              When I was with Galanti, we always loved this twist of phrase, given that no one was really using megabits as a memory standard (8 megabits = 1 megaBYTE), except to make things sound more impressive than they were. The total number of Allen "megabits" in a typical instrument I seem to remember, seemed awfully slim for a linearly-sampled organ of the traditional variety.

                              "52.5 kHz" is a really oddball "sampling rate" that literally no one else uses. I always interpreted that to be their clock frequency, given my evident belief that they weren't doing real-time traditional sampling until the Renaissance organs. Curiously, all that talk of "sampling rate" and "megabits" completely disappeared when the Renaissance organs, which DID use real-time sampling, came out, so I tend to believe that there were a lot of semantics at play in their marketing.

                              All that said, the W5 MDS organs could be quite nice, and their MDS technology was quite up to the task of making very fine-sounding organs, especially in the larger instruments, and like all organs, assuming a good installation, finishing, and speaker setup.
                              What do you mean by "real-time sampling"? All sample-playback instruments play samples back in real time. And Allen's use of megabits indicates they WERE doing sample playback, just with very short samples. The oddball frequency was used to try to eliminate audio artifacts that were present at slower clock rates. The Nyquist theorem states that a 40 kHz sampling rate is sufficient for 20 kHz frequency response in a digital signal, but it turned out that in the real world it didn't quite work that way.
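The Nyquist point is easy to demonstrate: a tone above half the sampling rate is indistinguishable, once sampled, from a folded-down alias. The frequencies here are chosen to match the 40 kHz / 20 kHz figures above:

```python
import numpy as np

# Sampling at rate fs can only represent frequencies below fs/2 (Nyquist).
# A 21 kHz tone sampled at 40 kHz folds back to 19 kHz: at integer sample
# indices n, sin(2*pi*(fs - f)*n/fs) == -sin(2*pi*f*n/fs).

fs = 40_000                 # 40 kHz sampling rate, Nyquist limit = 20 kHz
n = np.arange(4000)         # sample indices

f_hi = 21_000               # 1 kHz above the Nyquist limit
s_hi = np.sin(2 * np.pi * f_hi * n / fs)
s_alias = np.sin(2 * np.pi * (fs - f_hi) * n / fs)   # the 19 kHz alias

print(np.allclose(s_hi, -s_alias))   # → True
```

In practice the real-world problem is not the theorem itself but the analog anti-aliasing filter: a brick-wall cutoff exactly at 20 kHz is unbuildable, which is one reason oversampled or odd clock rates were used.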

                              - - - Updated - - -

                              Originally posted by michaelhoddy View Post
                              What year did Allen start using this? I always was of the impression that it was in the second generation of Renaissance organs.

                              There were and are definitely computer programs that would convolve short samples of audio with a reverb impulse response before the Dolby and Lake products, but those were more laboratory novelties than useful audio processing tools.
                              They started using it whenever they started using the term "Acoustic Portrait". I think it was when the Quantum line came out but I am not sure of that.

