Wednesday, July 23, 2008

Moving Blog site

I'm moving my blog to WordPress and my own site. Its new location will be http://www.ceaselessone.com/student. It should look very familiar, as I tried to make it pretty much the same and exported everything... Some minor changes were inevitable :(

The new site should be on Planet Olin soon. Peace Blogger!

What I'm doing this summer - part 3

So far I've talked about reconfigurable hardware in general and then the specific set-up of the FPAA. Now I'll start talking about what's actually inside the CABs (computational analog blocks).

First there are simple transistors. There are both nMOS and pMOS varieties, and you have access to their gate, source and drain - fairly straightforward. There are capacitors where you have access to both leads. There are some current mirrors, which make it simple to replicate a current without having to build the mirror yourself at the transistor level. Then there are a couple of more complex parts: the Gilbert multipliers and the OTAs.

Gilbert multipliers are probably the first use of the translinear principle (TLP). In short, this concept lets you multiply and divide currents by taking logarithms, adding and subtracting them, and then exponentiating the result. I'd explain it further, but I actually made a wiki page for my OSS (Olin Self-Study) that I really like. There's a two-quadrant multiplier at the end that should give you a good idea of how these things can work. If some parts of the wiki seem difficult to understand due to the writing, please let me know and I can hopefully fix it up. Or feel free to fix it up yourself if you're familiar with TLP. Gilbert multipliers are great because they let you directly multiply two signals at substantially faster-than-digital speeds, with much less power, and with orders of magnitude fewer transistors. The catch, you ask? Well, you can only really trust about 6 bits of a signal, and multipliers love to overflow. So your input signals can only be about 3 bits each if you want to guarantee that you don't get overflow... There are workarounds to maintain precision and speed, but they involve adding parts and complexity. That of course costs more power, but you still get orders of magnitude more efficiency than a digital multiplier.
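
If you want to play with the log-then-add-then-exponentiate idea numerically, here's a toy Python sketch. To be clear, this is purely illustrative - the function names and the 6-bit/64-LSB numbers are mine, and a real Gilbert cell works on continuous currents, not floats:

```python
import math

def translinear_multiply(ix, iy, iu):
    """Ideal translinear-loop result: summing log-domain terms and
    exponentiating gives exp(ln ix + ln iy - ln iu) = ix*iy/iu."""
    return math.exp(math.log(ix) + math.log(iy) - math.log(iu))

def overflows(i, full_scale):
    """True if a current exceeds the trustworthy full-scale range."""
    return i > full_scale

# With ~6 trustworthy bits (64 LSBs of range, unit current iu = 1 LSB),
# two ~3-bit inputs are safe: 7 * 7 = 49 < 64. Two 4-bit inputs are
# not: 15 * 15 = 225 overflows.
```

The point of the sketch is just the arithmetic: the product of two 3-bit signals always fits in 6 bits, which is why the inputs have to be so small if overflow must be impossible.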

This leaves just one part: the OTA (operational transconductance amplifier). These are actually fairly simple and extremely powerful. They have a differential voltage input and output a current that's proportional to the difference between the input voltages. As a simple example, let's make a voltage follower. Put your signal on the positive input. Now tie the negative input to the output. The output will source or sink current until the output voltage equals the input voltage. This is somewhat similar to an op-amp, but different because the output is a current instead of a voltage.
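
Here's a little behavioral sketch of that follower in Python (my own toy model - the gm and load-cap values are made up, and a real OTA has output limits this ignores):

```python
def ota_current(v_plus, v_minus, gm=1e-4):
    """Behavioral OTA: output *current* proportional to the
    differential input voltage (transconductance gm, in A/V)."""
    return gm * (v_plus - v_minus)

def follower(v_in, c_load=1e-9, dt=1e-7, steps=2000):
    """Voltage follower: negative input tied to the output node.
    The OTA sources/sinks current into a load cap until the output
    voltage equals the input voltage."""
    v_out = 0.0
    for _ in range(steps):
        i = ota_current(v_in, v_out)   # v_minus is the output itself
        v_out += i * dt / c_load       # current integrates on the cap
    return v_out
```

Run it and v_out settles to v_in exponentially, with time constant c_load/gm - exactly the first-order behavior you'd expect from tying the output back to the negative input.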

OTAs are actually quite powerful and provide a fairly straight-forward way to go from a transfer function or differential equation to a circuit. I'll show you that in my next post...

Monday, July 21, 2008

What I'm doing this summer - part 2

All right. Now that you've had a quick look at reconfigurable hardware, we can look at the FPAA in particular. How is it different from an FPGA? Well, the main difference is how information is encoded.

Instead of passing information as digital ones and zeroes, analog processing lets you take advantage of the full available voltage range, and even currents, to carry information. This means you can do great things like carry 5-9 bits of information on a single wire. So what's the catch? Well, analog signals aren't as pretty as digital signals. If you only have to differentiate between 0 and 1, you're going to have a much easier time than if you have to differentiate between 2.4 and 2.5. And in some cases it gets much worse than that - if you're using a voltage to control a transistor's gate, minute changes are vastly amplified. This means that you're acutely vulnerable to many things that digital systems barely have to consider.

Are your lines near each other? They'll have parasitic capacitances that couple them. Did you route a connection using a global line instead of two nearest-neighbor lines? You can expect different performance, and in some cases even complete loss of functionality, since the parasitic capacitance from routing is extremely significant. Are you in a place that has a temperature? If so, you'll get thermal noise, which means all of your signals have some randomness associated with them. These problems, especially the last one, are not easy to solve.

So why bother? Well, like I mentioned, you can carry more data on a single wire. Why does this matter? Well, you can do things like run two relevant signals into a multiplier that uses only a handful of transistors and get out a direct answer. This is far faster and less power-hungry than the digital method of splitting your signal into 1 bit per wire and running it into a many-stage multiplier that is far slower, takes hundreds of times more transistors, and eats through way more power. Sound awesome? That's because analog is awesome - just difficult.

So how does an FPAA work? Let me start by describing the structure. There are two main parts to the structure, the routing grid and the CABs (Computational Analog Blocks). If we look at a single column, there are lines that run the length of the chip (global verticals), lines that run from one CAB to a neighboring CAB (nearest-neighbor lines), and lines that only run within a CAB (local lines). Each horizontal line is attached to something in the CAB. Thus, to connect components, you just have to attach two horizontal lines to the same vertical line. This is done by turning on switches. Additionally, there are horizontal lines that can handle inter-column connections in a similar manner.
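
To make the switch-matrix idea concrete, here's a tiny Python model (labels like "gv0" and "ota1_out" are hypothetical names of my choosing, and this ignores both partly-on switches and multi-hop connections):

```python
class SwitchFabric:
    """Toy routing model: a horizontal line connects to a vertical
    line by turning on the switch at their crossing point."""
    def __init__(self):
        self.on_switches = set()      # (horizontal, vertical) pairs

    def close_switch(self, horizontal, vertical):
        self.on_switches.add((horizontal, vertical))

    def connected(self, h1, h2):
        """Two horizontal lines (i.e. two component terminals) are
        connected if some shared vertical line touches both."""
        v1 = {v for h, v in self.on_switches if h == h1}
        v2 = {v for h, v in self.on_switches if h == h2}
        return bool(v1 & v2)

# Hook an OTA output to a capacitor lead: close both switches onto
# the same global vertical.
fabric = SwitchFabric()
fabric.close_switch("ota1_out", "gv0")
fabric.close_switch("cap3_top", "gv0")
```

After those two switches close, fabric.connected("ota1_out", "cap3_top") comes back True - which is all "routing" means at this level of abstraction.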

The actual switches used in the FPAA are (who would've guessed it?) analog switches. They are capable of being partly on, which lets the connections themselves take part in calculations if you're clever enough to work them into your design. These switches are built from floating-gate transistors. A transistor passes an amount of current that's controlled by its gate voltage - building up charge on a floating gate lets you set that voltage once and then simply leave it alone, without having to constantly source the appropriate bias voltage.

So what's in the CABs? Well this actually varies. Some of the common things in CABs are nMOS and pMOS transistors, capacitors, OTAs (Operational Transconductance Amplifiers), Gilbert Multipliers and current mirrors. What are all of these? I'll give some explanation in my next post.

What I'm doing this summer - part 1

So I'm working with FPAAs this summer. What's an FPAA? Well, it's not too inaccurate to say that they're like FPGAs except analog... But in case that doesn't mean much to you, we'll take a quick walk down the:

History of Reconfigurable Hardware
Many moons ago, somebody thought of the idea of using a read-only memory (ROM) as a simple programmable logic device (PLD). The idea here is that a memory maps an address to an output. Let's say we have n address bits. There are 2^(2^n) possible logic functions of these inputs. A ROM will have m output bits. Of the 2^(2^n) possible logic functions, we can consider that our memory mapping is effectively implementing m of them. If we think about it this way, it becomes evident that a ROM will allow us to implement m logic combinations of n inputs and can be programmed to implement any logic combination of our choice. If you've heard of PROMs (programmable), EPROMs (UV-erasable) or EEPROMs (electrically erasable), this is the concept behind them. They're perfectly general, but they're slow, power-hungry and even glitchy during transitions.
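
The ROM-as-PLD idea is easy to demo in a few lines of Python (a toy, obviously - a real PROM is hardware, not a list - with names of my own choosing):

```python
def rom_lookup(rom, inputs):
    """Use a memory as a PLD: pack the n input bits into an address,
    and read out m output bits, each implementing one logic function."""
    address = 0
    for bit in inputs:
        address = (address << 1) | bit
    return rom[address]

# n = 2 inputs, m = 2 outputs. Each entry is (AND, XOR) for the
# address bits (a, b) in order 00, 01, 10, 11 - so one ROM
# "implements" both functions at once.
AND_XOR_ROM = [(0, 0), (0, 1), (0, 1), (1, 0)]
```

rom_lookup(AND_XOR_ROM, (1, 1)) returns (1, 0): AND true, XOR false. Any of the 2^(2^2) = 16 possible two-input functions could sit in either output column, which is the "perfectly general" part.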

In the late 1970s another methodology, programmable array logic (PAL), came forward. PALs were only programmable once, but their speed and efficiency were much improved over previous PLDs. The PAL was effectively used to replace a bunch of discrete logic elements in a circuit; you'd slap down a PAL instead of a dozen discrete TTL (transistor-transistor logic) devices. They used silicon antifuses that would be burned out to set a particular configuration. Eventually a couple of companies came up with ways to make rewritable versions of these and called them fun acronyms like GALs (Generic) and PEELs (Programmable Electrically Erasable Logic).

The next evolution was the CPLD. In this chip, you pretty much get a bunch of PALs connected together with some smart circuitry. The most important step in the evolution was probably that it could take serial input from a computer and have an on-board circuit parse all of that into programming for its cells. This made it possible to program thousands to tens of thousands of gates. Until recently, the big difference between CPLDs and FPGAs was that CPLDs were non-volatile, but now that many FPGAs can self-start and almost all CPLDs are rewritable, the distinction is less clear.

Which brings us to the currently-sexy FPGA. FPGAs can have thousands to millions of reconfigurable gates. These things are quite fast and very useful. I can't really speak for CPLD usage, but Wikipedia claims that they are about as useful as FPGAs and that the choice between them tends to be more about money or simply personal preference than actual technical difference.

These more complex devices are great because you get to abstract layers away. You no longer have to worry about the little gates, or sometimes even the big muxes, or many other things. You can simply program things like case statements and watch the magic happen. I will admit that I'm occasionally less excited about these things when I've tried to debug them, but *shrug*. They're pretty awesome.

That brings us fairly up to date on programmable logic devices and reconfigurable hardware. I'll talk about what's in a fancy new FPAA in my next post, and you'll get to hear about some of its ups and downs...

Monday, May 5, 2008

Spring cleaning

I had far too much stuff on my laptop. I'm sure I still do. I just reimaged because lappy had a fatal... uhmmm... something yesterday. I almost started to go on an installing rampage. But then I chillaxed.

I don't need that much stuff.

Firefox (with 2 extensions and some chrome editing for the look and vertical tabs)
Skype (with Twype to get twitter updates via skype)
Taskbar shuffle (because it's awesome)
Launchy (because it's awesomer)
Notepad++ (because it's not terrible like notepad)
Zulupad (because there's this one file I need for my OSS)
VLC (video and music)
PrimoPDF (pdf printer)

To be fair, the computer comes with office and photoshop and matlab and lyx. So maybe I'm not exactly a minimalistic user, but my start menu is now a single column instead of three so I'm gonna consider it significant. To be fairer still, I will be backing that up with portable apps that let me get isos, burn discs, rar things etc. etc. I wonder how long it'll last?

Wednesday, April 2, 2008

On responsibility for actions

I know that a lot of my philosophical babble is just that - babble. And a lot of the rest is common sense. Well, today we will be more or less in the common sense category.

There are lots of levels of responsibility for actions. For a long time, I've valued honor (in the old-fashioned sense) and believed that one should always accept responsibility for their actions.

But I think I can do much better.

Sometimes, I do better and claim responsibility for my actions. This is better in that responsibility is taken for an action without needing someone to effectively call you on it.

Now here's the common sense part. Why is it better to claim responsibility for an action than to accept it? Because it's earlier and more considered.

That can be expanded easily. One could take responsibility while doing an action. Or even further - one could be responsible for a decision to act.

I think people (myself included) claim that they're responsible for their actions when they are not. In point of fact, most people tend to (at best) accept their actions post facto. It seems rather obvious and common-sensical - banal even - but I've got a lot of work to look forward to if I'm serious about being responsible.

It boils down to the difference between these:

  • Yeah. You're right. My decision to continue got him killed.
  • I'm sorry. He got killed because of my decision to continue.
  • My decision to keep going could get somebody killed.
  • Given the possibility of someone getting killed if I choose to continue, I believe we should keep going.
I have a long ways to go.

Monday, March 31, 2008

Organization

So I was organizing a bunch of papers I have and added categories to the whole thing.

I'm rather amused at how I function nowadays and thought I'd share. For each paper I have something in a text file like this:

Minch99
-Reconfigurable Translinear Analog Signal Processing
-21
-For OSS
-Talks about MITEs and actually implements log-domain filters with them
>TL, FILT
Those lines are, in order: the file name, title, page count, original purpose, summary, and categories.

The part I find amusing is that it would take me probably less than 5 minutes to make a python script that can pull things out by a variety of factors. It would take me like 20 minutes to make an appropriate database and an AJAXy setup for looking through it. I like that I naturally set things up in a way that makes it easy to port like that. I'm also amused that I choose to just scroll through my text file. Oh well.
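
For what it's worth, that 5-minute Python script might look something like this (the field names are my own invention; it assumes records shaped like the example above):

```python
def parse_index(text):
    """Parse records: a file-name line, then '-' lines (title, page
    count, purpose, summary), then a '>' line of categories."""
    papers, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            current["categories"] = [c.strip() for c in line[1:].split(",")]
        elif line.startswith("-"):
            current["fields"].append(line[1:])
        else:
            current = {"name": line, "fields": [], "categories": []}
            papers.append(current)
    return papers

def by_category(papers, category):
    """Names of all papers tagged with the given category."""
    return [p["name"] for p in papers if category in p["categories"]]
```

Scrolling through the text file still works fine, of course; the point is just that the format is one trivial loop away from being queryable.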

Tuesday, March 25, 2008

On the unforgivable

There are some mistakes that are easy to get past, but what happens when one makes a mistake that violates a principle that they consider important?

Sometimes, an action is unforgivable. If another person makes this kind of mistake, it is possible to either let it sit or to be merciful and forgive the person. But this can't be easily done for oneself. If you forgive the unforgivable in yourself, it feels like you are changing your principles. It doesn't feel like mercy; it feels like failure. So how do you get past a mistake that matters?

The only answer I've been able to come up with is time. Just let it sit. It fades in memory. It becomes less important.

Maybe there's some rational way to say something like "I've acted 10 billion times and only 6 of these times were unforgivable mistakes, so this is a reasonable proportion." But my mind cannot work that way. For me, unforgivable is not forgivable. The interesting thing, from reading what I wrote above, is that I seem to think that the person who did something unforgivable can be forgiven, but the action itself cannot. That makes sense.

Any thoughts?

Sunday, March 16, 2008

Quietness Experiment

Some of you might have noticed a period starting two weekends ago and going through about last Tuesday where I was substantially quieter than usual. I was trying a bit of an experiment to see if I could be both quiet and passive as opposed to loud and borderline aggressive. It was hard, but also quite eye-opening.

I started on a Friday evening and pretty much just failed. I would just be frustrated for a while as I actively tried anything in order not to talk and then I'd eventually let my guard down for a second (usually because my willpower was being drained at incredible speeds) and I'd be back at full volume imposing my views on as many people as humanly possible. Not great. Also, I got sweet headaches from the failed effort of trying to be quiet.

This was the pattern for a few days. Eventually, with a great deal of self-programming (mostly by a hybrid meditation/self-hypnosis thing I've had going since high school) and force of will, my volume decreased. I started listening to what others had to say. Mostly, I tended to agree with what loud-Boris would've had to say on the subject, but I simply didn't say it. Sometimes, I found myself noticing that there was more thought behind some of the things others said than I would normally see (as they would've been interrupted before it became evident). I found myself enjoying being quiet sometimes. It started being classified as a default state. I met a friend's girlfriend and was introduced as "This is Boris. He used to be loud, but... uhmmm... now he's not... anyways, he's a good kid." Another friend told me that my default volume was not only lower than my old volume, but actually lower than the average person's default volume. I was feeling pretty proud.

In addition to the volume thing, I was also doing the passive thing. This meant lots of things. For example, I had to wait at the dinner table until someone left in order to leave instead of causing an exodus myself. Something I found rather amusing was that many groups had very solidified roles for people. Any group that relied on me to start conversations had incredibly funny awkward silences. To the point where one group spent most of a conversation talking about how they wished the other loud person in the group was there so that they'd have something to talk about. It was hilarious. Oh. And frustrating. Did I mention that? Anyways, this went on for a while and I was debating keeping it. Ultimately, the goal was to reach a state where both my volume and my level of assertiveness were things I could dynamically modify. But until then, I'd need a default. I was actually considering making it quietness. I wasn't going to do passive, but I thought I might go for assertive and quiet.

But then I decided to go back to a loud default. Why, you ask? Well, I seem to be unable to separate out the assertion from the volume easily. I started being really unhappy with myself when I'd occasionally decide to say something and then never say it because I wouldn't interrupt anyone. By the time a hole in the conversation popped up... well - it just wasn't relevant anymore. So I'm back to old, loud Boris.

*sigh*

Maybe I'll try separating out assertiveness and volume again sometime soon. It seems like a prerequisite for the end goal of having full, real-time control of both of these variables independently.

---------------------
On an entirely different note, I've had a headache since I got home. I think it's because of a lack of pressure. It's kind of hilarious really.

Sunday, March 9, 2008

Shorthand and the Major Memory System

So I ended up dredging up a couple of old pals today. In the past, I've learned a couple of forms of shorthand (Speed-Writing and Teeline) and I've read about (and tried) a variety of memory systems. These have all been entertaining for a while and then been left by the roadside, but none of it has been a waste.

I still remember Teeline just fine and occasionally use it to jot things that I'd rather not have others read or simply because I don't have room in a margin to write things out longhand. I'm certainly not faster at Teeline than normal writing anymore. (pdf of the Teeline system)

The Major System (wiki) is an infinitely generalizable peg memory system. The idea is that you link whatever you want to remember with a particular image that represents an index. The neat thing about this system is that each digit is represented by a consonant sound. For example, a 1 is a d or t sound. So my index for 1 is 'toe' and my index for 11 is 'dead.' If I want to remember a list of scientific discoveries, for example (Time's top ten list), where #1 is a method for making skin cells behave like embryonic stem cells - I could picture some skin from a toe being scraped off and grown into various organs.
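
Here's the digit-to-consonant table in Python, with a rough checker (a simplification on my part - the real system works on sounds, not letters, so things like a soft 'c' or a 'ch' won't be handled right):

```python
# Standard Major System mapping from digits to consonant sounds,
# approximated here by letters. Vowels (and w, h, y) are free filler.
MAJOR = {
    "0": "sz", "1": "td", "2": "n", "3": "m", "4": "r",
    "5": "l",  "6": "jg", "7": "kc", "8": "fv", "9": "pb",
}

def skeleton(word):
    """Consonant letters of a word: 'toe' -> ['t'], 'dead' -> ['d', 'd']."""
    return [ch for ch in word.lower() if ch not in "aeiouwhy"]

def matches(word, number):
    """Rough check that a peg word encodes a number, letter by letter."""
    cons, digits = skeleton(word), str(number)
    return len(cons) == len(digits) and all(
        c in MAJOR[d] for c, d in zip(cons, digits))
```

So matches("toe", 1) and matches("dead", 11) both hold, which is exactly the 'toe'/'dead' peg setup described above.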

Now here's my favorite part. Teeline uses only consonants. The Major System makes everything a consonant. I think they like each other. Combining them certainly adds a few extra elements of memory (I can remember writing out and seeing the Teeline symbol). More importantly, it got me excited. Time to relearn both. Maybe this time I'll stick with the Major System. It is conceptually incredibly powerful. If anyone wants to practice either at Olin, let me know.

Wednesday, March 5, 2008

Brain Fodder from Confucius

Hey all. I've been reading some Confucian writings and I thought I'd share some thoughts I found particularly interesting. I do suggest reading them. Stopping and thinking about these can be quite rewarding.

  • Hold faithfulness and sincerity as first principles.
  • Have no friends not equal to yourself.
  • When you have faults, do not fear to abandon them.
  • The accomplished scholar is not a utensil.
  • To see what is right and not to do it is want of courage.
  • The mind of the superior man (Chun-Tsze) is conversant with righteousness; the mind of the mean man is conversant with gain.
  • Chi Wan thought thrice, and then acted. When the Master was informed of it, he said, "Twice may do."

Monday, March 3, 2008

Y'know those stories about accidental discoveries?

I think I've caught a glimpse of what that must feel like. Here's the first page or so of my report b/c it's just text. Feel free to check out the whole thing here.

The birth of a control system


As I start to type up this lab, I continually glance back at my levitating object in wonder. I also glance up at the whiteboard and think that math is truly beautiful. I think my controller can best be explained by a story.

I was attempting to figure out a way to get the step response of the original system. It was not going terribly well. I could add a step, but I couldn't see anything at all through the noise from the system moving just a little at equilibrium (I'm currently thinking that a levitated object is not the best way to get the starting point that I then disturb with a step). Anyhow, I tried to stabilize the system by using a plastic rod to hold things together at the set point. That got rid of the high-amplitude, low-frequency noise, but high-frequency was still killing me. So I tried just adding a low-pass filter so that I might be able to see the step response better despite the noise. There was sadly too much noise.

At this point I started poking around, and one of the things I tried was changing the system to open-loop for a bit. I then took a look at the signal coming from the hall-effect sensor. Importantly, I did this by touching the output of the hall-effect sensor to the place where my probe was at the time (after the low-pass filter). Then things moved and I was confused (I hadn't meant to change anything by probing the sensor's output). In my attempt to figure out what was going on, I inserted the sensor output into the row with the probe and (I guess out of habit?) went to set up the levitated object.

It stayed. It stayed for a long, long time. And it had learned damping. It was incredible. I drew up the circuit on the board. I did math. It makes sense. It's absolutely incredible. I love math. I should still do that step response and system characterization thing, but I'll write up the math for why this works and include a block diagram first. So my design process was rather lacking, but I did figure everything out and fully understand it post facto. I was planning on doing the same thing with active elements, but this is actually more elegant. I'm glad I happened upon it.

Friday, February 29, 2008

MIT OCW and a couple of revelations

First a neat tidbit. OpenCourseWare is neat. If you're an Olin student who likes complaining about not getting intro circuits taught to you, try just listening to the videos for MIT's 6.002 while you do other things.

--- Something a bit less about learning ---

I'm a very layered person. I have a pragmatic side, an idealistic side, an ideal side and a true belief side. Pretty much all of my views across these are diametrically opposed (e.g. pragmatic capitalist vs. idealistic socialist). It's kind of great. Often it's just annoying. I rarely dip farther than my idealistic side, because it just doesn't come up - if it does, I'm just likely to get depressed. Things I recently learned (although I'd suspected the second one):

  • As it turns out, I'm actually sensitive to attacks on stuff deeper than my idealistic side.
  • Also, evidently my beliefs are such that I can seriously disturb people by simply mentioning them.
Sometimes I wonder.

Thursday, February 28, 2008

In case you were wondering what we do in estimation

We're doing problems like this one that I typed up for my portfolio (a deliverable for the IS). This one's actually a super-early one; some of the newer ones are more interesting IMO. Click through if you'd like the example problem to be large enough to read. Oh yeah - the small caps are kinda vocabulary terms. They're estimation techniques that we got from a text and that I define elsewhere in my IS portfolio. I'm going to assume you guys can figure it out, but I'll be happy to define them if comments make it clear that I am unclear.

Tuesday, February 26, 2008

That was weird

So I started the semester with 22 credits.

Then I dropped intermediate differential equations and swapped in Sci-Fi instead.

Then I asked for permission to overload to 24 credits so I could add a class that was still in the works - Analog Filters.

Then I was not really digging Advanced Digital Systems... what with SigSys NINJAing and, uhmmm, life. So I dropped that. I figured this would leave me with 20 credits.

But now it looks like Analog Filters isn't going to happen. So I'll be back down to 18 credits. Which I find quite humorous.

I'm OK with all of this though - after all I'm doing my OSS. I have an arbitrary amount of (enjoyable) work to do whenever I please. Awesome.

Also. The new Smash Bros is coming out in March - bring it.

Friday, February 22, 2008

My OSS is pretty

I'm doing my OSS on the history of analog circuit design. So I've been studying the history of translinear circuits and the history of log-domain filters to date. Until this week, it was all just reading, but I just put up an article on Wikipedia that I really, really like. I was originally just planning to add or expand the history section of the circuit classes I studied (which I'll do for, say, op-amps), but as it turns out, translinear circuits and log-domain filters don't have articles yet. Or didn't. Now, translinear circuits have a sweet article and the one for log-domain filters is likely to be coming to a free encyclopedia near you.

In short. My OSS feels much more real now that I've done something. It feels extra real b/c it looks like a pro wiki article what with citations left and right etc.

:D

Wednesday, February 20, 2008

Motivation in classes

So this is some correspondence from after nobody in the Advanced Digital Systems class turned in their second lab on time. Our teacher offered help; I responded by suggesting that maybe the issue was really motivation and not so much the technical stuff (although that issue also exists).

What motivates you to work in classes?

(Names are hidden b/c I don't like floating people's names on the internet - most readers know who's who ).

======================================================

Hi Boris,

I appreciate the honesty of your answer. What motivates
you in other classes?

-###

======================================================

Hey ###,

Hmmmm... I'm not sure I have a direct answer, but I'll just say things that seem relevant instead.

I'll start with a disclaimer. I /do/ work. I even enjoy independent work and big projects. I'm doing my OSS right now, I'm in my second Independent Study, I've done research for a couple of profs (although motivation actually /did/ kill one of those).

-----------------
On class types:
-----------------

One type of class is just problems. You do them, you're done with them - great. This is a common set-up for math classes etc.

A more open-ended type is based around a big project. These tend to need self-motivation. I'm not really sure where it comes from, but I know I did a lot of research for my VLSI class starting months before anything was due because I thought it was absolutely awesome. That all being said, the end-game of actually making a chip was less than stellar for me (although I did finish) and Olin has yet to get a working chip back from this class (3 years). POE also has a similar feel to it and some groups take to it great while other groups flounder for a long time before getting things started. I'm not sure if this is relevant, but both of these classes do small labs at the beginning and then have one /huge/ project compared to ADS' large project every two weeks.

Then there's the type where things are very straightforward and linear. These can be arbitrarily hard, but there's always a clear problem that needs solving next (###'s circuits class or last year's Communications class with ### are good examples of this).

-----------------
Back to ADS:
-----------------

Now, ADS is mostly open-ended. I feel it could do with more of a path. Or, alternatively, more guidance. The class effectively asks students to reach a goal. In two weeks. Students have about a decade and a half's worth of practice procrastinating - this one is easy. Time goes by and the problem starts to seem intractable. Where does one start? Well. I know I have to do this tristate thing. I know I have to simulate the mouse. I know I have to read the PS/2 spec. So I don't do much at all instead. Yeah. I don't actually have a solution.

ADS labs are 2 weeks long - they are huge. They require /lots/ of debugging - in fact, debugging is likely the brunt of the work. In short: they're big, they're frustrating, they have no clear starting point, and the work itself is mostly uninteresting (debugging). The carrot is great (it actually works and does something) but the path makes it look like it's not worth it.

So. If I had to guess. I really enjoy analog more than digital - that could be part of the motivation. There's also a class dynamic aspect of it where the entire class dreads the labs. So it could be the topic or the social structure. Here's one more thought. The projects are large, but I don't feel that I own them. They feel impersonal and difficult and that makes it hard to justify (as compared to big project classes where you design and implement your own idea).

Sorry I went on for so long,

-Boris

Tuesday, February 19, 2008

Numbers revisited

So I've actually been trying to use my number system and have decided some things are clunky. One change that was easy was a writing thing. I now do 3M4 (read 3mag4) and 5N3 (read 5neg3). That just makes things faster.

But the worst thing was actually pointed out to me by someone in my estimation IS. They said negative numbers were easy to confuse with negative orders of magnitude. I'm not sure what the best solution here is - I'm thinking the original setup is best, but ideas would be mighty useful.

Positive stuff is great:

Light year: 15M9.5 m (9.5x10^15 m)

I just love that. It feels great. But then we have negatives... here's the original set-up.

psi: 4N1.5 Pa (1.5x10^-4 N/m^2)

Now, that's a little hard to read b/c N and M don't look very different, but it's unambiguous. Here's a different idea to get rid of "neg" because it mucks things up...

psi: -4M1.5 Pa

That would be great if it didn't have an issue with the next one. Under my original scheme:

Electron charge: -19N1.6 C (-1.6x10^-19 C)

But getting rid of "neg" leaves me kinda confused here....

Electron charge: -(-19M1.6 C)

Yeah. Gross.

So currently the negative sign shows that the whole number is negative and the N shows that the magnitude is negative. Do people think this works well enough? Ideas for improvement?

Just a parting thought... would it be better with Pos and Neg instead of Mag and Neg? P and N are easier to see so it might be better like:
-19N1.6 and 15P9.5.
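
Since this is exactly the kind of thing a few lines of code pin down, here's a Python parser for the original scheme (the function name is mine; it treats a leading '-' as negating the number and N as negating the exponent, per the rules above):

```python
import re

def parse_mag(s):
    """Parse magnitude notation: '15M9.5' -> 9.5e15, '4N1.5' -> 1.5e-4,
    '-19N1.6' -> -1.6e-19. M means a positive exponent, N a negative
    one; a leading '-' makes the whole number negative."""
    m = re.fullmatch(r"(-?)(\d+)([MN])(\d+(?:\.\d+)?)", s)
    if not m:
        raise ValueError("not in magnitude notation: " + s)
    sign, exponent, kind, mantissa = m.groups()
    exp = int(exponent) if kind == "M" else -int(exponent)
    value = float(mantissa) * 10.0 ** exp
    return -value if sign else value
```

Writing the parser also backs up the claim above: the original set-up is unambiguous, because the leading sign and the M/N letter carry independent pieces of information.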

On Machiavelli

So I'm reading The Prince. I have yet to find it cruel or even intense. I tend to agree with Machiavelli on most things. I wonder what that speaks to - me or The Prince.

Wednesday, February 13, 2008

Blogospheric balancing

Lots of blogs tend to do the "OMG I'm so dead right now" thing. I thought I'd try to balance the blogosphere out a bit with this. Apparently, robo-Boris (a more efficient version of myself) was here from Saturday to Monday. As a result, I don't think I have enough work to keep me busy tomorrow (Wednesdays are nearly mid-weekend for me and are usually used for catching up).
On a related note, I played single-player Halo on a projector screen today. When I stopped playing after a couple of hours, my head started hurting and I felt rather nauseous. The screen is too big...

Thursday, February 7, 2008

Relearning sleep

I was really tired today. I stayed up till 2:30ish helping kids in SigSys and then had to get up at 8 so that I could do the reading for my 10 o'clock class. Long story short - I decided I'd take a wonderful 20 minute nap.

I figured it'd been a while since I'd done these, so I gave myself 25 minutes before a single beep from my cell and 30 before my actual alarm. I figured I'd be good to go with 10 whole minutes to fall asleep. Wrong.

I got maybe 3 minutes of sleep.

So it's time to train. I should be able to fall asleep within a minute and wake up 30 seconds or so before my alarm rings. This is a skill worth having.

Practice is real easy, but somewhat frustrating.

  1. Set two alarms.
    1. One of them is for 21 minutes from now and is gentle. A single beep or something.
    2. The other is f'serious and is 22 minutes from now.
  2. Get in bed.
  3. Relax. Meditate or whatever. Sleep if you can.
  4. When you hear the beep: Get Up.
That's it.

It's annoying until you get good b/c you tend to just be in bed awake for 20 minutes. But one does get good with practice. After a bit, the second alarm becomes completely useless; for me, I started waking up about 30 seconds before the beep. This is what I did before starting biphasic sleep freshman year, and not losing half an hour each time I went to bed was quite helpful, seeing as that would have been twenty-something percent of my sleep time...

Wednesday, February 6, 2008

Asynchronous circuits

Most digital design uses clocks to ensure that everything works correctly. The main idea here is that the system will be correct after everything has settled out (electrons travel through wires, transistors turn on, all the gates complete their functions). The issue is that our clock has to be slow enough that the slowest process in our circuit has time to finish. So what's the solution?

Asynchronous design. The idea here is that instead of having a clock tick once everything must be valid, each module can tell the next module when it's ready. In other words, the paradigm shifts from a global clock to many local 'handshakes.' I really want to just get to the cool parts - so hopefully this won't lose people...

Let's say there are two modules. A is the sender and B is the receiver. We'd like to get some data from A to B, but only once B is ready for it. So let's say A is ready to send. I'll run through two schemes...

Bundled Data scheme (with active sender):

In this scheme, A pulls the req wire high to let B know that the data line is valid. Then B uses the data and sets its ack line high. When A sees the acknowledge go high, it sets the request line back down to 0. B responds by setting the ack line to 0. So after all of this, the ack and req lines are back to 0 and all's good. It's worth noting that there could be any number (N) of data bits here for a total of N+2 wires.

So there's one issue with this. When A's data becomes valid, it sets request high... but what happens if B sees the req line go high before the valid data travels all the way down to B? These signals all travel at finite speed, so in the end we get designs that move signals faster down the data line(s) than down the req line in order to get around the race condition.
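Just to make the wire-wiggling concrete, here's a toy Python trace of the four-phase handshake above. The wire names req/ack match the post; everything else is invented for illustration, and the bundling constraint (data settles before req rises) is faked by simply writing the data before raising req:

```python
# Toy trace of the four-phase bundled-data handshake: A drives data, then
# req; B latches, then acks; both control wires return to zero.

def bundled_data_transfer(bits):
    wires = {"data": None, "req": 0, "ack": 0}
    trace = []

    def drive(who, wire, value):
        wires[wire] = value
        trace.append(f"{who}: {wire} -> {value}")

    drive("A", "data", list(bits))   # sender puts valid data on the bundle
    drive("A", "req", 1)             # ...then signals that it's valid
    latched = list(wires["data"])    # receiver B uses the data
    drive("B", "ack", 1)             # ...and acknowledges
    drive("A", "req", 0)             # return-to-zero phase
    drive("B", "ack", 0)
    assert (wires["req"], wires["ack"]) == (0, 0)  # back at rest
    return latched, trace

latched, trace = bundled_data_transfer([1, 0, 1])
```

Note the wire count: the three data bits plus req and ack give N + 2 = 5 wires, as in the post.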


DI Dual Rail scheme (with active sender):

So here's a set-up that avoids the race condition entirely. As such, it qualifies as truly DI (delay insensitive). The idea here is that A has both the true line and the false line low - this is its null state and indicates it has no data. Now when A pulls a line high, B knows what the data is and that it's valid... ZOMG! Once B has used the data, it lets A know by raising the ack line. A responds by returning to the null state and then B responds by lowering its ack line. Now everything is back where it started and we avoided the race condition!

But wait. This is engineering. I wouldn't have bothered telling y'all about the bundled scheme if it was always worse. The tradeoff is in the number of wires: N bits will take 2N + 1 wires to implement in a DI dual rail scheme (two wires per bit, plus the ack line), versus N + 2 for bundled data.

Neat.

Tuesday, February 5, 2008

A better way to say numbers

So this is actually an old thought, but I haven't really pushed it around much.


Numbers are really bad at being said. They're also not good at being thought about or manipulated mentally when they get big or small. They give information in all the wrong places. For example: seven-hundred-and-sixty billion has all of its good information in two places. Most of it is in "billion." Then there's a lot in "hundred" and some in the fact that it's seven hundreds... My point is: it sucks. Whoever designed numbers didn't do good design. (<-- j/k kids)

So let's look at how this has been solved before - scientific notation: 7.6x10^11. Not bad. Especially because the brain handles the whole thing as a unit, so it can get the 11 early on. The order of magnitude is by far the most important thing. Even better is 7.6e11. That has less extraneous stuff and says the same thing. But it's not meant to be spoken. Even so, if you say "7.6e11" it's much faster than "seven-hundred-and-sixty billion."

But it could be better. The order of magnitude should be first. So what I like is inverting the order of magnitude and the fine grain number and saying the x10^.
Something like: 11mag7.6

And we could get rid of another couple of syllables by hitting small numbers with a contraction for x10^ -

As in: a microsecond is 6neg seconds. Oh yeah - you can also just leave out the mantissa and have a 1 be implied.

Just to use it somewhere I'll copy over a problem from my estimation class. We're estimating the budget of Pasadena. We've found that Pasadena has about 4mag2 acres of land. We estimated the cost of land at $6mag/acre. And we're going with a property tax of 1%. So:

4mag2 acres * 6mag USD/acre * 2neg USD/USD = 8mag2 USD
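Done in plain powers of ten (integer arithmetic, just as a check on the mag math):

```python
acres  = 2 * 10**4              # 4mag2 acres of land
price  = 10**6                  # $6mag per acre
budget = acres * price // 100   # 1% property tax (2neg)
assert budget == 2 * 10**8      # 8mag2 USD - matches the estimate above
```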

I like that the format of the number makes it natural to do the exponents first (which is the important data). I'm not sure how I feel about something like:

3mag2 * 1neg8 = 2mag16 = 3mag1.6

It should really get into the last form, but the second form is more natural. Meh.
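For what it's worth, getting from the second form to the last is just sliding a power of ten from the mantissa into the exponent. A sketch (function name invented):

```python
import math

def normalize(exp, mant):
    """Slide the mantissa back into [1, 10): 2mag16 -> 3mag1.6."""
    shift = math.floor(math.log10(mant))
    return exp + shift, mant / 10 ** shift

# 3mag2 * 1neg8: exponents add (3 + -1 = 2), mantissas multiply (2 * 8 = 16),
# then one renormalize step gives the final 3mag1.6.
assert normalize(2, 16.0) == (3, 1.6)
```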

Thoughts?

Reboot

So it's been a while.

This semester is looking to be seriously busy. I'm doing enough that Outlook has me booked for 30 hours a week of scheduled stuff. Just classes and SigSys ninja meetings really. Then I have to do the work for all those 30 hours of classes and grading for SigSys and stuff... Oh! And I've taken up a hobby! I'm baking bread multiple times a week so that should be exciting - that tends to happen concurrently with reading for something so it doesn't actually take that much virgin time.

Anyhow. Now that I'm busy I thought I should get back to this blog. After all, it exists to hold neat stuff that I learn and I'm currently learning lots of neat stuff.

So this semester will see some good stuff:

  • Advanced digital systems: We're making a video game console on an FPGA. I'm not sure how bloggable it'll be, but the class is certainly cool.
  • Estimation: This is an IS (independent study) I'm doing with a few other kids. It's a lot of order of magnitude physics estimation, dimensional analysis etc. This is a lot of neat stuff that should be fun to toss here - expect to see it a disproportionate amount.
  • History of Analog Circuit Design: This is what I'm doing for my OSS (Olin Self Study). I'm pretty much trying to see if I can answer the question "How the #$% did someone think of that?!"
  • 6 Books that Changed the World: This class is a half semester gig where we read and analyze a book a week. Books are: The Communist Manifesto, Uncle Tom's Cabin, Darwin, The Prince, The Art of War and a book of my choice which will likely be John Maynard Keynes' The General Theory of Employment, Interest, and Money. I'm planning on also reading Adam Smith's The Wealth of Nations but that might be far too optimistic. Oh well.
  • Controls: Really sweet class on a really sweet subject (feedback, transfer functions etc.) that is of questionable bloggability.
  • MAD VLSI2: This one probably won't be up here much. Neat stuff, but advanced enough to need more context than I want to provide here or you, dear reader, likely want to read.
  • Signals and Systems: I'm ninjaing this class with two other kids. This is neat stuff. I'll see if I can toss up concepty stuff here while avoiding the mathy stuff.
  • Intermediate DiffEQ: Yeah. Doubt this will be here ever. It's also the next half of the semester.
  • Bread: w00t!
So that's my semester. I'm pretty excited.