Everything posted by jgazal

  1. Dear Wachara, thanks for your reply. Perhaps I have not understood electrostatic charges yet, so please help me understand how this works. I believe anodizing still insulates the listener's body from the stator/diaphragm/amplifier circuit; I think you are saying that too. Nevertheless, I thought static electricity was an imbalance of positive or negative charges on the surface of a material (extremely high voltages but almost no current), mainly when the material is not a conductor: my hair is not a conductor and still stands up under an electrostatic imbalance. The Mylar does not seem to be conductive, but it still becomes charged by the circuit bias, am I right?
As I understand it (okay, understanding is not my strength...), such a special coating does not reduce the anodizing's insulation. It just avoids charge imbalances on the surface of the material, so the surface does not become electrostatically charged. Am I getting this wrong? We want the diaphragm to become electrostatically charged, but we want the earcup to stay neutral. Without a special coating, though, anodized aluminum will become charged if it has "intimate contact" with other insulating materials. Whether such a charge is enough to cause an imbalance at the stators and diaphragm, I do not really know. If an engineer put an electrostatic voltmeter on that earcup and said it reads a low voltage, then I would be fine.
There are two types of insulating materials: one type becomes positively charged and the other becomes negatively charged. Which type is anodized aluminum? The diaphragm is positively charged. If the anodized aluminum becomes negatively charged, then an imbalance might occur when the AC signal drives the external stator positive, right? Is it strong enough to pull the diaphragm? I do not know. I am just trying to understand how it works. This is my not very reliable reference:
There is another question I am trying to figure out. Suppose I have a plastic earcup. It is an insulating material, so it might become electrostatically charged. Would that be a positive or a negative charge? Would that charge be higher or lower in voltage than the charge on the anodized aluminum earcup? If both charges are similarly low, then the anodized aluminum would not need that special coating. That must be the most plausible hypothesis. Am I saying something really wrong?
  2. Is it necessary to apply some kind of special coating to the anodized aluminum ear-cups on the SR-009? I was trying to understand electrostatic charges in anodized aluminum and I have found this reference:
  3. VBA and Google Spreadsheets? Nice. Edited: GAS - Google Apps Script
  4. Please sign me up for 1 set of the KGSSHV boards.
  5. Dr. Gilmore has already said everything I believe needs to be said: @ Dr. Gilmore: please forgive me for jumping in on questions addressed directly to you.
  6. I knew you were going to like it. Stax knows that you are going to dismantle your unit anyway, so they decided to disclose the bowels themselves... :)
  7. Have you seen this SR-009 video? http://www.ustream.tv/recorded/13640147 Previously posted on other sites. Words are sometimes misleading...
  8. Gosh, have a look at that stator. State of the art.
  9. EIFL is located in Fukushima, Japan. Are they operating?
  10. Yamaha Custom Tenor Saxophone YTS-875. Made in Japan. 5.5 lbs of French brass. Approximately 600 pieces. Highly skilled workers. 3 months from raw materials to finished product. ¥366,900 (Kakaku - Yamaha YTS-875). USD 4,272 (Music123 - Yamaha YTS-875). Very, very interesting: http://www.youtube.com/watch?v=RqEoasEzyB4. I know that we cannot compare oranges and apples, but just for a reference value. At least the lacquer is applied with an electrostatic process...
  11. Which input JFETs have you used in the challenge version? LSK170? 2SK170? Edit: Forgive me, I have just read that it is the 2SK170.
  12. Unfortunately, I do not have the schematics... Anybody? I totally agree with you.
  13. Thank you for answering. I am very intrigued now. I would like to compare the XA-5400ES circuit to a dCS model, for instance. First we have all these Xilinx FPGAs or CPLDs with dCS custom code to "convert between the various digital standards", and then a digital stream goes to this beautiful top board, which receives that stream and uses some kind of comparator to create an analog current pulse by switching those flip-flops through the resistor array, right? (A small sketch of that kind of multi-bit-to-1-bit conversion follows below.) p.s.: I suppose dCS would not match these resistors to 0.1%, right?
So the BB chip inside the XA-5400ES is only doing what the first half of this dCS top board is doing, right? Converting a digital signal into an analog current pulse, right? Then we have the same op-amp integrated circuits to do the current-to-voltage conversion, although the dCS seems to have a lot of op-amps... So does that custom code, which converts 16-bit/44.1 kHz into a raw 1-bit signal feeding such a comparator, really influence the sound? I thought the analog stage had the major role in sound quality.
The culminating question: did Sony put all the money into conversion code? I thought they already had that kind of code when they launched DSD in the first place. How can studios master a 1-bit signal without converting it into a multi-bit PCM stream? I thought that kind of conversion was already state of the art, simply done on computer workstations with ordinary software. How do digital filters and up-sampling or over-sampling relate to those programmable chips? Could Sony do some relevant digital interpolation before the BB DAC to justify the customization? At least that is what dCS claims with its custom software approach, right?
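Just to make the question concrete, here is a minimal sketch, in Python, of the kind of multi-bit-PCM-to-1-bit conversion I am asking about: a plain first-order delta-sigma modulator. The function name and parameters are my own illustration, not anyone's actual code; real converters such as the dCS boards or Sony's DSD chain use much higher-order modulators, heavy oversampling and noise shaping.

```python
# Illustrative first-order delta-sigma modulator: turns a multi-bit PCM
# signal (values in [-1.0, 1.0]) into a 1-bit stream whose local density
# of +1 bits tracks the input. Purely a sketch of the principle.

import math

def delta_sigma_1bit(samples, oversample=64):
    """Return a list of +1/-1 bits, 'oversample' bits per input sample."""
    integrator = 0.0
    bits = []
    for x in samples:
        for _ in range(oversample):
            # Quantize the integrator state to one bit...
            bit = 1.0 if integrator >= 0.0 else -1.0
            bits.append(int(bit))
            # ...and feed the quantization error back into the loop.
            integrator += x - bit
    return bits

if __name__ == "__main__":
    # A 1 kHz test tone sampled at 44.1 kHz, just a handful of samples.
    fs, f = 44100, 1000.0
    pcm = [0.5 * math.sin(2 * math.pi * f * n / fs) for n in range(32)]
    stream = delta_sigma_1bit(pcm, oversample=64)
    print(len(stream), "bits; first 40:", stream[:40])
```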
  14. Does anyone really know what these circuits between the CD-ROM drive and the BB DAC are supposed to do?
  15. In fact, it seems that some private messages were recovered. Apart from some difficulty identifying watched content (mainly "watched topics", formerly "subscribed threads"), everything seems better. Thank you. p.s.: is there a way to download/archive a .txt of all the messages in the "my conversations" folder?
  16. The PCB message has one receiver: the crazy fellow who is building it. And you have already been sensitized by the information. If I wanted my friends to know what Stax is, I would engrave such a message on the front panel and keep those high voltages out of sight.
  17. It was just a theory... Now we can eliminate that possibility...
  18. Perhaps those in the schematics of the critical signal path (R9/R10 - R78/R79): And the output resistors: Right? I am using the ones listed in the BOM:
  19. That's my etymological theory.
  20. I have never heard OSS (Optimal Stereo Signal) or a Jecklin Disk. I will try to find them through the link you posted.
  21. Agreed. What I am still trying to figure out is the viable point in the audio chain to make use of the pinna and external ear influence. Let me try to make my idea clearer. If I record anything with a microphone sitting at the entrance of my ear canal, there is no need for circumaural headphones: my pinna and external ear influence are fixed within the audio track, and an in-ear headphone would reproduce those influences. The problem is that regular recordings do not carry my HRTF. Some recordings use a microphone arrangement that tries to mimic the tympanum; then circumaural headphones during reproduction could do the trick of adding the pinna and external ear influence. Does this chain work in the real world?
The distance between the left and right tympanum determines the inter-aural differences, and it varies from person to person. Elevation cues are mostly tonal differences, and here the pinna and external ear make a huge difference. If you record anything without your HRTF, I think there is no way to recover the elevation cues, so regular recordings always have a flat vertical axis. The lateral axis is easily achieved with a stereo-phantom effect (the delay between left and right channels); it is not the same with elevation.
Now suppose you have those regular tracks (made with mixing consoles or an XY pattern) and a processor that is able to impose your own HRTF on the audio track (I am not a shill, but please read this). The position of your speakers is the basis for that convolution (similar to the one Dusty mentioned; see the sketch after this post). You could change the vertical axis by manipulating those elevation cues. What is the result? Instead of having the virtual speakers in front of you, they will be on the ceiling (just like in your preferred music store). Will voices sound positioned above a guitar? I do not think so. You are going to have a flat vertical axis in the recording and a true vertical axis for the virtual speakers (this processor lets you change the vertical axis of the virtual speakers).
What can be done to solve that? I think the only way to reproduce a vertical axis without a personal recording would be to create a phantom-stereo effect on the vertical axis as well (and not only on the lateral axis, left and right). In theory, just 8 channels are needed (some would prefer 9, to separate non-directional bass signals). But then people would need to place speakers in the corners of a room, which is not the best place acoustically speaking. That is more or less what NHK has been testing: NHK develops 3D sound 22.2 multichannel headphone processor. Well, they are Japanese, which means perfection; and 22 + 2 channels, a lot of storage, computing power, etc.
@ Dusty: please forgive me for going further on this subject. I still agree with you that we should be concerned with people, with musicians. It is just that the way we hear with only two audio channels has always fascinated me, and I am still searching for answers.
@ faust3d: please note that the mentioned processor measures the HRTF with speaker impulses within a certain room. If we were using in-ear headphones, no further filters would be necessary. But with circumaural headphones a second round of pinna and external ear modifications takes place and a digital filter is needed; that is why the processor also measures the impulse response from the headphones. See that text for more details and the relevant bibliographic references.
@ head-case moderators and experienced users: forgive me for the long post and do not shoot me. I do not want to be banned...
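For what it is worth, here is a minimal sketch of the HRTF convolution step mentioned above, with made-up impulse responses standing in for measured HRIRs; a real system, like the processor I linked, measures them per listener and per direction (azimuth, elevation), and per headphone.

```python
# Minimal sketch of binaural rendering by HRTF convolution.
# hrir_left/hrir_right are hypothetical placeholders, not real measurements.

import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with a left/right HRIR pair -> stereo signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs // 10) / fs
    mono = 0.3 * np.sin(2 * np.pi * 440 * t)           # a short 440 Hz tone
    # Toy HRIRs: the right ear gets a delayed, quieter copy, mimicking the
    # inter-aural time/level difference of a source off to the left.
    hrir_left = np.zeros(64);  hrir_left[0] = 1.0
    hrir_right = np.zeros(64); hrir_right[30] = 0.6    # ~0.68 ms delay
    stereo = render_binaural(mono, hrir_left, hrir_right)
    print(stereo.shape)  # (samples, 2), ready to write out as a stereo file
```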
  22. Well, it might be the fact that the left channel does not reach your right ear and vice-versa, right? I do not know whether driver size is the relevant parameter for vertical positioning. Vertical orientation is something I find really difficult to sense with headphones (or even with speakers). I believe that vertical orientation is defined by your HRTF (Head Related Transfer Function), and that it would only be possible to replicate with your very own binaural recordings.