Wednesday, July 23, 2008

Moving Blog site

I'm moving my blog to WordPress and my own site. Its new location will be http://www.ceaselessone.com/student. It should look very familiar since I tried to keep it pretty much the same and exported everything... Some minor changes were inevitable :(

The new site should be on Planet Olin soon. Peace, Blogger!

What I'm doing this summer - part 3

So far I've talked about reconfigurable hardware in general and then the specific set-up of the FPAA. Now I'll start talking about what's actually inside the CABs (Computational Analog Blocks).

First there are simple transistors. There are both nMOS and pMOS varieties, and you have access to their gate, source and drain - fairly straightforward. There are capacitors where you have access to both leads. There are some current mirrors, which make it simple to replicate a current without having to go through the process of mirroring it yourself at the transistor level. Then there are a couple of more complex parts: the Gilbert multipliers and the OTAs.

Gilbert multipliers are probably the first use of the translinear principle (TLP). In short, this concept lets you multiply and divide currents by taking logarithms, adding and subtracting those, and then exponentiating the result. I'd explain it further, but I actually made a wiki page for my OSS (Olin Self-Study) that I really like. There's a two-quadrant multiplier at the end that should give you a good idea of how these things work. If some parts of the wiki seem difficult to understand due to the writer, please let me know and I can hopefully fix it up. Or feel free to fix it up yourself if you're familiar with TLP. Gilbert multipliers are great because they allow you to directly multiply two signals at substantially faster-than-digital speeds, with much less power usage, and with orders of magnitude fewer transistors. The catch, you ask? Well, you can only really trust about 6 bits of a signal, and multipliers love to overflow. So your input signals can only be about 3 bits each if you want to guarantee that you don't get overflow... There are workarounds to maintain precision and speed, but they involve adding parts and complexity. That of course makes you use more power, but you still get orders of magnitude more efficiency than a digital multiplier.
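
Here's a purely numeric sketch of that log-add-exponentiate trick - not the FPAA's actual circuitry or any real API, just the arithmetic that a translinear loop gets for free from the exponential I-V characteristic of its transistors:

import math

# Purely illustrative: the "log, add/subtract, exponentiate" arithmetic
# that a translinear loop does implicitly.
def translinear_multiply(i_x, i_y, i_unit):
    return math.exp(math.log(i_x) + math.log(i_y) - math.log(i_unit))

# Two input currents and a unit current, all in amps:
print(translinear_multiply(2e-6, 3e-6, 1e-6))  # ~6e-06, i.e. Ix*Iy/Iunit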

This leaves just one part: the OTA (operational transconductance amplifier). These are actually fairly simple and extremely powerful. They have a differential voltage input and output a current that's proportional to the difference of the input voltages. As a simple example, let's build a voltage follower. Put your signal on the positive input. Now tie the negative input to the output. The output will now source or sink current until the output voltage equals the input voltage. This is somewhat similar to an op-amp, but different because the output is a current instead of a voltage.
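
To see that settle numerically, here's a toy Euler-integration sketch of the follower. The OTA is idealized as I_out = Gm*(V+ - V-) and the component values are made up for illustration:

# Idealized OTA driving a load capacitor. Values are assumed.
GM = 1e-4   # transconductance, A/V
C = 1e-9    # load capacitance, F
DT = 1e-7   # simulation time step, s

def follower_step(v_in, v_out):
    # With V- tied to the output, the OTA dumps current into the
    # load cap until the output voltage matches the input.
    i_out = GM * (v_in - v_out)   # note: the output is a current
    return v_out + i_out * DT / C

v_out = 0.0
for _ in range(1000):
    v_out = follower_step(2.5, v_out)
print(round(v_out, 3))  # settles at 2.5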

OTAs are actually quite powerful and provide a fairly straightforward way to go from a transfer function or differential equation to a circuit. I'll show you that in my next post...

Monday, July 21, 2008

What I'm doing this summer - part 2

All right. Now that you've had a quick look at reconfigurable hardware, we can look at the FPAA in particular. How is it different from an FPGA? Well, the main difference is how information is encoded.

Instead of passing information as digital ones and zeroes, analog processing allows you to take advantage of the full available voltage range - and even currents - to carry information. This means you can do great things like carry 5-9 bits of information on a single wire. So what's the catch? Well, analog signals aren't as pretty as digital signals. If you only have to differentiate between 0 and 1, you're going to have a much easier time than if you have to differentiate 2.4 from 2.5. And in some cases it gets much worse than that - if you're using a voltage to control a transistor's gate, minute changes are vastly amplified. This means that you're acutely vulnerable to many things that digital systems barely have to consider.
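
For a sense of where the 5-9 bits number comes from, it's just a logarithm of how many voltage levels you can reliably tell apart. A quick back-of-the-envelope (the rail and resolution values here are illustrative, not the FPAA's specs):

import math

v_rail = 3.3   # available voltage range, V
v_step = 0.1   # smallest difference you trust yourself to resolve, V
levels = v_rail / v_step
print(math.floor(math.log2(levels)))  # 5 bits on one wire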

Are your lines near each other? They'll have parasitic capacitances that couple them. Did you route a signal using a global line instead of two nearest-neighbor lines? You can expect different performance and in some cases even complete loss of functionality, since the parasitic capacitance from routing is extremely significant. Are you in a place that has a temperature? If so, you'll get thermal noise, which means that all of your signals have some randomness associated with them. These problems, especially the last one, are not easy to solve.
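
To put a number on that last one: the standard Johnson-Nyquist formula gives the RMS thermal noise voltage across a resistance. The component values below are just for illustration:

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
R = 10e3             # resistance, ohms (illustrative)
BW = 1e6             # bandwidth, Hz (illustrative)

v_noise = math.sqrt(4 * K_B * T * R * BW)   # v_n = sqrt(4kTR*BW)
print(f"{v_noise * 1e6:.1f} uV rms")        # about 12.9 uV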

So why bother? Well, like I mentioned, you can carry more data on a single wire. Why does this matter? Well, you can do things like run two relevant signals into a multiplier that uses only a handful of transistors and get out a direct answer. This is far faster and less power-hungry than the digital method of splitting your signal up into 1 bit per wire and running it into a many-stage multiplier that is far slower, takes hundreds of times more transistors, and eats through way more power. Sound awesome? That's because analog is awesome - just difficult.

So how does an FPAA work? Let me start by describing the structure. There are two main parts: the routing grid and the CABs (Computational Analog Blocks). If we look at a single column, there are lines that run the length of the chip (global verticals), lines that run from one CAB to a neighboring CAB (nearest-neighbor lines), and lines that only run within a CAB (local lines). Each horizontal line is attached to something in the CAB. Thus, to connect components, you just have to attach two horizontal lines to the same vertical line. This is done by turning on switches. Additionally, there are horizontal lines that can handle inter-column connections in a similar manner.
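
If that's hard to picture, here's the mental model I use - my own simplification, not the FPAA's real programming format. Components hang off horizontal lines, and two components are connected when their horizontal lines are switched onto the same vertical line:

class Fabric:
    def __init__(self):
        # (horizontal_line, vertical_line) pairs whose switch is on
        self.switches = set()

    def close_switch(self, h_line, v_line):
        self.switches.add((h_line, v_line))

    def connected(self, h_a, h_b):
        # Two horizontal lines are connected if some vertical line
        # has a closed switch to both of them.
        verts_a = {v for (h, v) in self.switches if h == h_a}
        verts_b = {v for (h, v) in self.switches if h == h_b}
        return bool(verts_a & verts_b)

fab = Fabric()
fab.close_switch("ota1_out", "global_v3")
fab.close_switch("cap2_top", "global_v3")
print(fab.connected("ota1_out", "cap2_top"))  # True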

The actual switches used in the FPAA are (who'd have guessed it) analog switches. They are actually capable of being partly on. This allows the connections themselves to take part in calculations if you're clever enough to work them into your design. These switches are built from floating-gate transistors. A transistor passes an amount of current that's controlled by its gate voltage - building up charge on a floating gate allows you to set that voltage once and then simply leave it alone, without having to constantly source the appropriate bias voltage.
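
For a feel of why "partly on" covers a useful range, here's a rough subthreshold model. The parameter values are generic textbook numbers, not the FPAA's:

import math

I0 = 1e-15    # pre-factor current, A (generic)
KAPPA = 0.7   # gate coupling coefficient (generic)
UT = 0.025    # thermal voltage at room temperature, V

def switch_current(v_fg):
    # Subthreshold current rises exponentially with the stored
    # floating-gate voltage, so "partly on" spans a huge range.
    return I0 * math.exp(KAPPA * v_fg / UT)

for v in (0.3, 0.4, 0.5):
    print(f"Vfg = {v:.1f} V -> I = {switch_current(v):.2e} A")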

So what's in the CABs? Well, this actually varies. Some of the common things in CABs are nMOS and pMOS transistors, capacitors, OTAs (Operational Transconductance Amplifiers), Gilbert multipliers and current mirrors. What are all of these? I'll give some explanation in my next post.

What I'm doing this summer - part 1

So I'm working with FPAAs this summer. What's an FPAA? Well, it's not too inaccurate to say that they're like FPGAs except analog... But in case that doesn't mean much to you, we'll take a quick walk down the:

History of Reconfigurable Hardware
Many moons ago, somebody thought of the idea of using a read-only memory (ROM) as a simple programmable logic device (PLD). The idea here is that a memory maps an address to an output. Let's say we have n address bits. There are 2^(2^n) possible logic functions of these inputs. A ROM will have m output bits, and each output bit, viewed as a function of the address bits, implements one of those possible logic functions. If we think about it this way, it becomes evident that a ROM will allow us to implement m logic combinations of n inputs and can be programmed to implement any logic combinations of our choice. If you've heard of PROMs (Programmable), EPROMs (UV Erasable) or EEPROMs (Electrically Erasable), this is the concept behind them. They're perfectly general, but they're slow, power-hungry and even glitchy during transitions.
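
In code, the ROM-as-PLD idea is just a lookup table. Here's a tiny sketch with n = 2 address bits and m = 2 output bits, where I've arbitrarily made one output column an AND and the other an XOR:

# 2 address bits in, 2 output bits out: each output column is one of
# the 2**(2**2) = 16 possible two-input logic functions.
rom = {
    (0, 0): (0, 0),
    (0, 1): (0, 1),
    (1, 0): (0, 1),
    (1, 1): (1, 0),
}

for a in (0, 1):
    for b in (0, 1):
        and_out, xor_out = rom[(a, b)]
        print(f"{a} {b} -> AND = {and_out}, XOR = {xor_out}")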

In the late 1970s another methodology, programmable array logic (PAL), came forward. PALs were only programmable once, but their speed and efficiency were much improved over previous PLDs. The PAL was effectively used to replace a bunch of discrete logic elements in a circuit; you'd slap down a PAL instead of a dozen discrete TTL (transistor-transistor logic) devices. They used silicon antifuses that would be burned out to set a particular configuration. Eventually a couple of companies came up with ways to make rewritable versions of these and called them fun acronyms like GALs (Generic) and PEELs (Programmable Electrically Erasable Logic).

The next evolution was the CPLD (complex programmable logic device). In this chip, you pretty much got a bunch of PALs connected together with some smart circuitry. The most important step in the evolution was probably that a CPLD would take serial input from a computer and have an on-board circuit parse all of that into programming for its cells. This made it possible to program thousands to tens of thousands of gates. Until recently, the big difference between CPLDs and FPGAs was that CPLDs were non-volatile, but now that many FPGAs can self-start and almost all CPLDs are rewritable, the distinction is less clear.

Which brings us to the currently-sexy FPGA. FPGAs can have thousands to millions of gates that can be reconfigured. These things are quite fast and very useful. I can't really speak for CPLD usage, but Wikipedia claims that they are about as useful as FPGAs and that the choice between them tends to be more about money or simply personal preference than actual technical difference.

These more complex devices are great because you get to abstract layers away. You no longer have to worry about the little gates, or sometimes even the big muxes, or many other things. You can simply program things like case statements and see the magic happen. I will admit that I'm occasionally less excited about these things when I've tried to debug them, but *shrug*. They're pretty awesome.

That brings us fairly up to date on programmable logic devices and reconfigurable hardware. I'll talk about what's in a fancy new FPAA in my next post, and you'll get to hear about some of its ups and downs...

Monday, May 5, 2008

Spring cleaning

I had far too much stuff on my laptop. I'm sure I still do. I just reimaged because lappy had a fatal... uhmmm... something yesterday. I almost started to go on an installing rampage. But then I chillaxed.

I don't need that much stuff.

Firefox (with 2 extensions and some chrome editing for the look and vertical tabs)
Skype (with Twype to get twitter updates via skype)
Taskbar shuffle (because it's awesome)
Launchy (because it's awesomer)
Notepad++ (because it's not terrible like notepad)
Zulupad (because there's this one file I need for my OSS)
VLC (video and music)
PrimoPDF (pdf printer)

To be fair, the computer comes with office and photoshop and matlab and lyx. So maybe I'm not exactly a minimalistic user, but my start menu is now a single column instead of three so I'm gonna consider it significant. To be fairer still, I will be backing that up with portable apps that let me get isos, burn discs, rar things etc. etc. I wonder how long it'll last?

Wednesday, April 2, 2008

On responsibility for actions

I know that a lot of my philosophical babble is just that - babble. And a lot of the rest is common sense. Well, today we will be more or less in the common sense category.

There are lots of levels of responsibility for actions. For a long time, I've valued honor (in the old-fashioned sense) and believed that one should always accept responsibility for one's actions.

But I think I can do much better.

Sometimes, I do better and claim responsibility for my actions. This is better in that responsibility is taken for an action without needing someone to effectively call you on it.

Now here's the common sense part. Why is it better to claim responsibility for an action than to accept it? Because it's earlier and more considered.

That can be expanded easily. One could take responsibility while doing an action. Or even further - one could be responsible for a decision to act.

I think people (myself included) claim that they're responsible for their actions when they are not. In point of fact, most people tend to (at best) accept their actions post facto. It seems rather obvious and common-sensical - banal even - but I've got a lot of work to look forward to if I'm serious about being responsible.

It boils down to the difference between these:

  • Yeah. You're right. My decision to continue got him killed.
  • I'm sorry. He got killed because of my decision to continue.
  • My decision to keep going could get somebody killed.
  • Given the possibility of someone getting killed if I choose to continue, I believe we should keep going.
I have a long ways to go.

Monday, March 31, 2008

Organization

So I was organizing a bunch of papers I have and added categories to the whole thing.

I'm rather amused at how I function nowadays and thought I'd share. For each paper I have something in a text file like this:

Minch99
-Reconfigurable Translinear Analog Signal Processing
-21
-For OSS
-Talks about MITEs and actually implements log-domain filters with them
>TL, FILT
These lines are, in order: the file name, title, page count, original purpose, summary, and categories.
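
For the curious, here's a minimal sketch of what pulling entries out could look like - the parsing rules are guessed from the single entry above, so treat it as a starting point:

def load_entries(path):
    entries, current = [], None
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            elif line.startswith("-"):
                current["fields"].append(line[1:])
            elif line.startswith(">"):
                current["categories"] = [c.strip() for c in line[1:].split(",")]
            else:
                # A bare line starts a new entry (the file name).
                current = {"name": line, "fields": [], "categories": []}
                entries.append(current)
    return entries

def by_category(entries, cat):
    return [e["name"] for e in entries if cat in e["categories"]]

# e.g. by_category(load_entries("papers.txt"), "TL") -> ["Minch99"]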

The part I find amusing is that it would take me probably less than 5 minutes to make a python script that can pull things out by a variety of factors. It would take me like 20 minutes to make an appropriate database and an AJAXy setup for looking through it. I like that I naturally set things up in a way that makes it easy to port like that. I'm also amused that I choose to just scroll through my text file. Oh well.