My breakout boards arrived today and it looks like they work perfectly. Now I'm able to look at the higher bit rate traffic too.
It looks like the HSKi pin has a 230.4 kHz clock in normal LocalTalk driverless mode as expected. When you choose the PB Adapter driver, it changes the entire communication method. Here are some captures. Blue is TxD-, yellow is RxD-, and pink is HSKi.
This is what it looks like when idle. As you can see, there's a 1.25 MHz clock on HSKi, along with periodic pulses on TxD followed by RxD.
Here's some traffic. For some reason the clock stops. Just judging by the way TxD is behaving, it seems pretty clear to me that it's no longer FM0 encoding.
Again, weird long pauses on TxD. No idea what's going on here.
Zoomed out a bit, you can see some RxD traffic and you'll notice there are some long clock pauses too. Maybe when receiving, the Mac can stop sending pulses which tells the adapter to slow down because it can't handle traffic that fast?
I suppose they can do something a little more efficient knowing that collisions are impossible when they're hooked directly from an adapter to a Mac. So maybe they did it for efficiency, or maybe reliability. Who knows...I just thought it was interesting. I think any work I do on a project like this would probably stick with just using the standard LocalTalk FM0 encoding scheme for driverless compatibility. We'll see. Anyway, it was interesting to see what the faster mode does, but the real exciting thing is that now I should be able to easily tap into a spot in between the Mac and EtherWave and sniff some traffic at the 230.4 kbps rate once the SCC is working properly.
Awesome, and very useful to see the screenshots of the scope.
I was trying to think about it from the Mac side, which probably receives these as bytes from the SCC. Assuming the bits on the wire are 1:1 with the 1.25 MHz clock, the Mac should be receiving at roughly 160 kbytes/s, and since each packet is a maximum of 576 bytes, that works out to roughly one packet every 3.6 ms.
On a 25 MHz 030, that'd be over 156 processor cycles for each byte received, which should be plenty even with 4-cycle memory access times.
Theoretically, it should get an interrupt for the packet being received, then be sitting in a loop with interrupts disabled, polling in the packet to a preallocated buffer, which I'd think can be completely handled by a 25MHz 030. I'd speculate pauses could happen between packets, since there's much more processing that has to happen once the packet is received. Ideally, the faster driver would have a ring of preallocated buffers, but at some point I could see that being saturated...
The big advantage to me isn't actually the throughput, but the increased responsiveness of the system during LocalTalk transfers. Since LocalTalk sits in a loop polling the SCC for each packet being received, the faster that happens, the less time interrupts are disabled. A 230.4 kbps transfer of a 576-byte packet could have interrupts disabled for as much as 20 ms! And although the built-in LocalTalk driver has a bunch of extra code to mitigate that, having interrupts disabled for only 3.6 ms should result in a much more responsive system.
Thank god for the invention of DMA and background operations these days. LOL.
Anyway, if the SCC is difficult to deal with, why not just write SCC emulation for a high-speed ARM, to perform FM0 decoding?
As far as custom interfaces/drivers are concerned, I wouldn't use the Farallon driver and method. I would use basic LocalTalk to get something working, then implement something custom. Hell, for that matter, you could probably implement an even faster data transfer method than the Farallon one. But you would need your own driver, of course.
I think it becomes a matter of the lesser of two evils. The SCC, as annoying as it is to use, is likely still easier than manually decoding FM0, especially with the error conditions and little edge cases that we can be guaranteed the SCC handles correctly. With that said, I do think there's some value in figuring out how to bit-bang it, considering that if they stopped making the SCC we'd be screwed. For now I'm sticking with making it work with the SCC though.
I agree 100% and that's what I already said...regular LocalTalk is definitely coming first. I was just curious what exactly the EtherWave did for its speedup.
While you lot are working on your fantastic developments, would it be possible to find a diagram of a PhoneNET connector box?
I know it uses an isolation transformer, but the wiring itself is up in the air. In fact, I don't even know which wire leads to TX and which one leads to RX.
The reason is that I have a 512Ke I'm working on and I can't figure out the proper wiring arrangement for the DE9 connector to hook up to PhoneNET.
I looked at: 1) the patent filing, 2) this page on CapNet, 3) and finally this page in German which at least tells me that TXD+ and RXD+ are joined together, and the same goes for RXD- and TXD-. But I'm not sure if the PhoneNET box is wired the same inside.
I'm not completely sure about the internals, but I agree that TXD and RXD will be joined together since it's only using two wires as a single differential pair. It might be easier to find a DIN-8 to DB-9 adapter to hook up instead of working inside the PhoneNET box. Also, I know there are PhoneNET boxes that exist with DB-9 connectors. They should be available on eBay.
This Apple knowledge base article should have the proper pinout for you if you want to make an adapter. Just in case Apple decides to take it down in the future (I wouldn't put it past them), here's a copy of the article content:
* Pins 1 and 3 on the DB-9 end are jumpered together.
The pins on the male end of the circular 8 connector are numbered as shown:
6 7 8
3 4 5
NOTE: The Macintosh Plus peripheral adapter cable is stamped with the number 590-0341; when reordering, however, be sure to use the cable's service part number, which is 699-0430 (older cables may be referred to as 699-0372).
Today I wired something up on my breadboard and played around with it a bit. I'm not finished connecting parts together yet, but I got everything hooked up that I need to talk from a microcontroller to a Z85C30 SCC. You'll see I have several RS-485 transceivers sitting there doing nothing. Once I know for sure that I'm talking correctly with the SCC, I'll go there.
Obviously it's not pretty and I'm probably doing very nasty things to the signals by going through wires like this, but it's a start. This is an NXP LPC1114 LPCXpresso board with integrated programmer that I got quite a while ago. It's running at 48 MHz which should be plenty. BTW, the LPCXpresso IDE (it's Eclipse-based) has improved a ton since I last checked it out. Dare I say it was actually easy to use?
The red wire is actually supplying an 8 MHz clock to the SCC. I set up one of the LPC's timers to toggle at 16 MHz and output the value to a pin all in hardware, thus sending an 8 MHz clock signal to the SCC with no software overhead. The SCC is 5V and the LPC1114 is 3.3V (with 5V-tolerant IOs), but it seems to behave OK for now. There are things I can do if that causes trouble in the future.
I will probably need to get a more exact oscillator to make the final communication work properly. It looks like 3.6864 MHz may be the minimum requirement (230.4 kHz * 16) but I'm thinking I might try for 7.3728 MHz to kill two birds with one stone by giving it something close to 8 MHz while also being able to divide it down to exactly 230.4 kHz.
Other than the clock signal, I connected D0-D7 (to the LPC1114's P2.0 through P2.7, conveniently), and I also connected /CE, /RD, /WR, and D//C. I hardwired A//B to 5V so I always have channel A selected. I also hardwired /INTACK to 5V because I don't want to deal with interrupt stuff yet.
I wrote some quick routines to do read and write cycles. I haven't tightened the timings yet, so I went with some delays that are probably longer than they need to be. Once I'm in the mood, I'll try to get the timings down closer to their minimums. I tested out the read and write cycle routines by trying to read "Read Register 0" and "Read Register 1". I'm consistently getting a result of either 0xDC or 0xD4 for RR0 and 0x00 for RR1. The bit that's changing in RR0 is supposed to represent the state of a pin that is currently floating, so that's passing my sanity detector.
Next up: do a bunch of write cycles to configure the SCC to do something (probably in a simple asynchronous mode) and sniff the output on my scope to make sure it's all behaving properly. Then there will be the fun of figuring out how the SCC needs to be configured for LocalTalk, what to look for in its registers to determine when something happens, set up interrupts if they even end up being necessary, etc. This microcontroller doesn't have ethernet, but I'm just using it as a proof of concept for the SCC side. There's no question the Ethernet side is feasible. I'll bring out the big guns (probably the LPC1768) when I get that far.
I've got the pinouts for LocalTalk ports, but as for the RJ-11 pinouts, which one is TxD and which one is RxD? I may end up getting it backwards, because somewhere along the line the pins get crossed so that one device's TxD feeds into the other's RxD.
I wrote a bunch of code to configure the registers of the SCC for some simple asynchronous communication...and...it didn't work at first. It turns out I made several mistakes and the results I was reading earlier were probably not completely garbage, but close to it. Follow along with me:
1) I didn't realize one of the pins I was using on the LPC1114 board is also hooked to the board's LED which I'm toggling at times, so it was screwing everything up.
2) My function that changes the direction of the data pins was always making the data pins into inputs because I rushed through writing the code yesterday and wasn't paying attention to what I was doing.
3) In the write cycle function I was accidentally writing a bool (for choosing data write cycle or command write cycle) instead of the uint8_t containing the data I wanted to write. That's what I get for naming the bool parameter "data". I've since changed the name to "isData" for more clarity.
After all of that, I can happily say I have communication with the SCC working. I'm giving it an 8 MHz clock input and I told it to use the baud rate generator to create a 31.25 kHz clock internally, output it to the TRxCA pin, and use it as the transmit baud rate. Fun fact: 31.25 kbps is the MIDI baud rate and it's convenient because it can be created perfectly by dividing from a 1, 2, 4, 8 MHz, etc. clock. I also configured the SCC to transmit in 8N1 format, and then made it periodically send the letter "A". Here's the scope looking at the TRxCA pin (blue) and the TxDA pin (yellow). This also gives me a chance to show off its RS-232 decoding capabilities.
Mk.558: I'd definitely recommend starting another thread about your PhoneNet pinout question so it doesn't get buried in my updates on this project. Here's my understanding: I don't believe PhoneNet has separate wires for RX and TX. It has a single differential pair that is used for both RX and TX -- half duplex like techknight said, so you can either transmit or receive, but not both at the same time. That's why RX+ and TX+ can be wired together, and same with RX- and TX-. The question on the RJ11 side is not which pin is RX and which pin is TX, but rather it's which pin is + and which pin is -. I know this is how Apple's LocalTalk boxes worked, and I'm pretty sure it's how PhoneNet works too. PhoneNet is just a little bit more special because it doesn't have shielding/ground and it doesn't automatically terminate the unconnected ends.
I'm kind of held back right now while I wait for a 3.6864 MHz oscillator to arrive so I can match up to the 230.4 kbps baud rate to sniff some traffic. I also got an 8 MHz one so I can quit driving the PCLK signal from my microcontroller. In the meantime, I've been thinking a little bit about things.
There are basically three ways you can bring a clock signal into each channel of the SCC from the outside world:
1) The PCLK pin -- this is the main clock to the SCC, shared by both channels. It is given a 7.84 MHz clock signal on my IIci.
2) The RTxC pin of the channel -- has a 3.6864 MHz clock input hooked up on the Mac's logic board (usually, anyway...)
3) The TRxC pin of the channel -- hooked to the HSKi pin of the corresponding serial port
PCLK or RTxC can be fed into the baud rate generator to create other baud rates, but the baud rate generator is really only capable of dividing its input clock by an even integer of 4 or more.
GttMFH2e claims my IIci's printer port RTxC pin actually has the same clock as PCLK. I had trouble tracing this out on my IIci and didn't want to break anything, so I'm not totally sure on this. I accidentally shorted something and it rebooted; don't want to do that again! It was a really strange-looking signal (I probably wasn't doing something correctly) and I was too afraid to short something again to look any closer.
From what I can tell, the PLL used for clock recovery when receiving FM0-encoded data (like LocalTalk) requires an input clock that is 16 times as fast as the actual bit rate -- so you need 3.6864 MHz for recovering a 230.4 kbps data stream. With all of this in mind, I can't figure out how you could possibly use external clocking with regular old FM0-encoded LocalTalk traffic. You'd have to put a signal that's like 8 MHz onto the TRxC pin (through the serial port's HSKi pin) just to double the standard bit rate. Is it even feasible to put a signal with that high of a bit rate onto one of the serial port pins without massive ringing? Maybe it is; just wondering. But even if you do that, the SCC has to be told to use that as its input clock instead of the RTxC pin. This makes me believe that something in software has to enable the higher clock rate.
Were any of the faster-clocked LocalTalk technologies driverless? I'm starting to doubt that it's even possible to externally clock a faster baud rate without a driver to reconfigure the SCC. I think it's making sense to me why the EtherWave switched to a different communication scheme. Perhaps they wanted to use one that didn't require a clock 16 times as fast on the HSKi pin.
All of this is moot for now anyway because I'm going to be getting it working with the standard 230.4 kbps baud rate. But...it's still interesting to think about the higher bit rate possibilities.
Of the very few things I have, I have a ton of 3.6864 MHz crystals and oscillator ICs. Back in the day I toyed with the Atmel AT90-series processors, and they didn't have clock speeds higher than 8 to 10 MHz. So I used these guys to get UART-compatible speeds.
Hehe, yeah. I looked at some old junk circuit boards first to see if one of them might have had such an oscillator, but no such luck. Makes sense that it would be a common frequency.
I don't have any scope images to share, but here's what the breadboard is looking like now. I'm lazy, so this is coming from my iMac's camera.
I have the 8 MHz oscillator going into the PCLK pin for the actual clock of the SCC. I got an 8 MHz SCC so I've maxed that out. The 3.6864 MHz oscillator goes into the /RTxC pin. From the /RTxC pin, the 3.6864 MHz clock goes directly into the DPLL, which is supposed to take a clock 16 times the desired baud rate. This is for recovering the clock from received FM0 data. The 3.6864 MHz clock from the /RTxC pin also goes into the baud rate generator, which divides it by 16 to turn it into 230.4 kHz for the transmit clock.
I've turned on the transmit enable, and now it's sending out nonstop flag bytes onto the TX pin.
Next on the list: wire up the SCC's RX pin to a transceiver so I can tap into the connection between the IIci and EtherWave. Then, hopefully, I can write some code that can dump received data, probably over the LPC1114's UART at a high baud rate if possible. It'll be interesting to see how that all works. Apparently the SCC will handle all of the CRC checking for me...
I guess I'm not sure what you're suggesting there. It's actually working just fine with the crystal setup the way I have it. I'm pretty sure there has to be exactly a 3.6864 MHz clock going into the SCC, because I have to feed a clock exactly that fast to the DPLL and use the baud rate generator to divide it by 16 to get the 230.4 kHz clock for transmitting.
Speaking of that, I have a hardware question for you: is there some kind of a clock divider IC you can buy for dividing by 2, or 4, or whatever? Is there some kind of a 7400 series chip that's a clock divider?
Today I wired up one of the RS-485/RS-422 transceivers to start listening on the TX+ and TX- pins. I finished writing code to set up the SCC, and I'm sniffing packets sent from the IIci to the EtherWave. I'm sending the sniffed results over the LPC1114's UART at 3 Mbps and grabbing it with a PL2303-based TTL to USB serial adapter. I'm too afraid to short RX+ to TX+ and RX- to TX- just in case that'll mess up the EtherWave, although I doubt it would hurt anything. It would be pointless anyway: the communication, at least in legacy 230.4 kbps mode, is acting half-duplex even though it's wired up for full duplex. I know I'm sniffing the traffic properly, and I think that's all that matters.
Here's the current state of the breadboard:
And here's a peek at my breakout board in action giving me access to the RS-422 signals:
The SCC gave me some grief. First of all, I had some arbitrary delays in my code to ensure I wasn't trying to read/write too fast. These delays were causing me to lose received bytes from the SCC. I removed the delays completely and everything is still working OK. I guess just the fact that I'm bit-banging it automatically makes it slow enough for the SCC. I'll probably break out my scope to make sure I'm still within timings.
The next problem I had was that sometimes the last CRC byte would be treated as a brand new packet all by itself, or sometimes the first byte of the next packet would be treated as the last byte of the previous packet. It all depended on the order in which I was reading a couple of the SCC's status registers (RR0 and RR1). The magic combination that's working for me is this:
Read RR0, and if it has the "byte available to read" bit set, read RR8 to read a single byte and append it to the current packet.
If RR1's "end of frame" bit is set, write a command to WR0 to clear the error flags, and process the complete packet.
The final step of clearing the error flags also clears the "bytes available" bit. Even though that bit auto-clears at the start of the next frame, there seems to be a race condition. There's a chance this can happen: I read RR1 and it still has the "end of frame" bit set. I then read RR0 and it says data is available, so I read RR8 and it gives me a byte. It turns out RR1's "end of frame" bit turned off in the meantime, so I get confused about what that byte is really supposed to be. There may be a better way to handle the reading, ideally one that doesn't require another write cycle. Maybe after "end of frame" goes to 1, I should poll RR1 until the bit goes low, and then begin reading data again. I'll have to play around and see what I can discover.
I honestly think the SCC's interface for figuring out when the frame ends kind of sucks, but it's definitely easier than rewriting the whole logic from scratch.
The third problem I had was not the SCC's fault. I wrote an interrupt-driven UART driver to send sniffed packets out of the UART at 3 Mbps, figuring that would be super fast and wouldn't interfere with the SCC. Except I was using printf, which is incredibly inefficient for that. It was actually taking long enough to make me miss bytes from the SCC. I wrote my own little utility for converting the packets to an ASCII readable format and now everything's working fine.
I'm still having problems where sometimes the SCC quits giving me any kind of "end of frame" notices at all and I have to reset the LPC1114 to make it work again. It's not the UART driver's fault. The SCC quits working. Maybe I need to add some capacitors to the SCC to ensure it's powered OK, or maybe I truly am reading too fast from it now.
I still have to get sending working, which will be hard to test until I'm actually converting packets from Ethernet. I guess I can look on a scope to make sure writing is working correctly. I'm almost tempted to hook this up to a Raspberry Pi or BeagleBone Black or something like that to do the Ethernet side of things. Dunno if there will be any strict timing requirements that have to be met, like the distance between RTS and CTS frames. (Do those even get sent over ELAP, or is that something I'll have to simulate for the LLAP side? I guess I can dump some ethernet traffic on the Mac mini to find out...)