2011-08-25

scala instead of perl

I've used perl for years as a "better shell script", and found it perfectly natural and easy to express the things I've asked of it.
The other day I did something that seemed perfectly natural to me: I passed a list to a function that returned another list.
I expected this to work, as it would in any other language.
I was testing each function separately and everything seemed fine.
So I started testing the script as a whole; that's when strange things started happening.
Lists started containing too many values, and values of the wrong kind.
So I did some googling, without really knowing what kind of problem it was, assuming the whole time that I'd made some simple semantic error.
What I didn't expect was the real problem:
Perl doesn't support lists properly.

You can't have a list of lists, and you can't pass a list to a function or return a list from a function; Perl flattens them all into a single list of scalars.
You can have a list of list references however, or pass list references to a function, or return a list reference.
So this whole time I'd just happened to be passing scalars around, thinking perl was a pretty straightforward language. The reality is that it isn't actually much higher level than C.

It was originally designed in the late eighties to early nineties, so it makes sense that the OO and functional aspects aren't as well thought out as in newer languages. I guess the idea of collections being fundamental wasn't that big at the time either.

I've recently had a chance to use ruby again, and this time I've actually enjoyed it. Especially the metaprogramming aspect. As ruby was designed a fair bit later, it feels a lot more modern; and lists work as I expect!

I think from now on, (assuming I get a choice), I'll be using perl _only_ for very simple shell scripts, or not at all; and using more competent languages (like ruby) for anything of substance.

For this specific task, I've reimplemented it as a Scala script; and in doing so I've fixed a bunch of bugs in my original perl code.
It's also made the code run a bit faster, as well as made the code easier to change.
Scala's regex and system exec libraries aren't as succinct as perl's; but you'd have to make them first-class features of the language (as perl has) to reach that goal.
It's still run as a script, so it won't be any different from the user's point of view.
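
For a feel of the difference, here's a minimal sketch of the Scala equivalents, assuming Scala 2.9's sys.process library (the command and pattern are invented for illustration):

import scala.sys.process._

// run an external command and capture its output (perl: my $out = `ls -l`;)
val listing: String = "ls -l".!!

// regex matching via an extractor (perl: if ($line =~ /(\w+)=(\d+)/) { ... })
val KeyValue = """(\w+)=(\d+)""".r
"retries=3" match {
  case KeyValue(key, num) => println(key + " -> " + num)
  case _ => println("no match")
}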

learning breadth vs depth

This is the list of languages that I learned and used at university, when studying Computer Science:
  1. C
  2. C++
  3. Java
  4. Python
  5. Perl
  6. Common Lisp
  7. Prolog
  8. Shell scripting
  9. Motorola 68k assembler
  10. MIPS assembler
This is the list of languages that I learned and/or used at work, when developing corporate software:
  1. Java
  2. Perl
  3. C++
  4. C
  5. Ruby
  6. BPEL? (this doesn't really count)
The above lists are ordered by frequency of use.
Now it may look like I learned a hell of a lot more languages at uni than I have at work; but really we only had time to scratch the surface of those languages, learning just enough to get the subject projects completed.
At work we may only use a small number of languages, but I've had to study them in greater depth. In addition, I've had to learn a lot of different frameworks and tools (especially in Java).

It's a matter of:
  • breadth vs depth
  • core-language vs tools and frameworks.

Is it a framework or a library?

Paul Chiusano expresses a very similar sentiment in push-libraries-vs-pull-libraries.

correct or configurable?

I would rather work on software that changes easily and does the correct thing, than the kind of overly configurable software described in http://www.thoughtclusters.com/2009/09/software-analysis-paralysis/ and http://www.thoughtclusters.com/2007/08/hard-coding-and-soft-coding/.

I've worked on projects where the software was incredibly configurable. One was able to support over 11 different versions of over 30 different interfaces to external systems. You could reconfigure the core behaviour to a ridiculous degree.
The system was so configurable that finding a working configuration became a problem in development, and getting your hands on the actual correct configuration was practically impossible.
The users had the one gold configuration that they modified for each new version; development did the same.
Creating a complete configuration from scratch would have been impossible.
Given the way it was managed and updated, the config was actually treated as part of the code base; it was just in a different language (an external DSL). So the only benefit was being able to change parts of a live system without compiling. Using an interpreted language would have the same effect.
So maybe this type of system should really be built from two languages, a compiled language for the performance critical sections, and an interpreted language for the more dynamic behaviours of the application.
The dynamic behaviour would not be something the average user would modify.
A third part of the application is the user defined configuration.

Games based on the Unreal engine (one of the most used 3D games engines), almost exactly follow the above layering scheme:
  • The core engine is written in C++.
  • The content of the game (and mods) is written in UnrealScript. (Which in this case is compiled, rather than interpreted as I proposed.)
  • The user configuration is stored in key/value text property files.
If the script layer was interpreted, then users would be able to change the game rules themselves; so it's understandable that they've gone with compiled.

Having this kind of layering would really help with the average Java programmer's obsession with being able to make changes without compiling. This obsession is part of the reason that there are so many frameworks that require tonnes of xml config. To me it all stems from the fact that Java is compiled, and for most purposes the dynamic features require too much overhead to bother with.
On the JVM, Java could still be used for the compiled stuff, with groovy, JRuby or Jython being used for the middle layer, and finally xml and property files for the config layer. The key is that the engine and middle layers should be able to call each other in both directions, and should both have access to the config layer.
If reloading this middle layer at runtime is required, I'm sure it's possible in most interpreted languages. I've done something recently in ruby which did exactly that.
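
As a rough sketch of what this could look like on the JVM (assuming JDK 6's built-in javax.script JavaScript engine, and hypothetical rules.js and app.properties files):

import javax.script.ScriptEngineManager
import java.util.Properties
import java.io.FileInputStream

object Engine {
  // config layer: a plain property file that users can edit
  val config = new Properties()
  config.load(new FileInputStream("app.properties"))

  // middle layer: an interpreted script, reloadable while the system runs
  val scripting = new ScriptEngineManager().getEngineByName("JavaScript")
  scripting.put("config", config) // the script layer sees the config too

  def reloadRules() {
    // re-evaluating the script replaces the previous rule definitions
    scripting.eval(scala.io.Source.fromFile("rules.js").mkString)
  }
}

The compiled engine calls into the script through the same javax.script interface, so the two layers can call each other and both can see the config.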

Another option would be to use a language such as Scala or a lisp, which can operate at all 3 layers.
Compiled statically typed code, interpreted script possibly in an internal DSL, and simple DSL for storing user config.

With this kind of architecture, it would be possible to have different base configurations available for different individual users, while having different middle layers for different classes of users.

hacker howto

http://catb.org/~esr/faqs/hacker-howto.html

Teach Yourself Programming in Ten Years

2011-07-05

Thinking Functionally

I've been thinking about software solutions more and more as pure functions.
This quote describes perfectly the way my thinking has changed:
In fact, you can think of any impure function as having three "steps": 1) an "input" side effect, Unit => B, 2) a pure function, (A,B) => C, and 3) an "output" side effect, C => Unit. It makes perfect software engineering sense to decouple these components - and that is exactly what is done in purely functional programming.
from actors-are-not-good-concurrency-model.

One side benefit is that the core logic of the functionality is now perfectly testable; as pure functions are the easiest thing in the world to write tests for.
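
As a small illustration (the greeting example is my own invention), the three steps of one impure function might be separated like this:

// input side effect, Unit => B: read a name from stdin
def readName(): String = readLine()

// pure function, (A, B) => C: build the greeting; trivially testable
def makeGreeting(prefix: String, name: String): String = prefix + ", " + name + "!"

// output side effect, C => Unit: print the result
def printGreeting(greeting: String) { println(greeting) }

// the impure whole is just the composition of the three
def greet(prefix: String) { printGreeting(makeGreeting(prefix, readName())) }

assert(makeGreeting("Hello", "world") == "Hello, world!")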

2011-05-02

a project just for fun

I've started a project of my own.

I've decided to use a bunch of technologies I'm actually interested in, rather than the usual crap I have to use:
  • scala - general programming
  • gradle - build
  • mongodb - persistence
  • swing from scala - UI
  • vlc / winamp / wmp - media playback
  • lift - web interface, if I get around to it
I've used scala before, but just for experiments and tools.
I've been reading up on gradle, as I've been using ant fairly extensively. I am not a fan of maven, but I do like the ideas behind the dependency management.
I've used swing a fair bit before, and the application will have a fairly simple interface. I thought about trying out SWT, but I wasn't sure how it would integrate with scala.
I wanted it to work with any of the media players on my PC.
There'll probably be some sort of admin interface done in lift, if I get around to it. Lift seems to be the obvious choice in scala. I've been told that it's very different from struts, which I have used and am not a fan of.

The biggest technology leap has been moving from an SQL db to mongodb. I've heard a fair bit about nosql, and it sounded like a much better persistence mechanism for a small project like this one.

The difference between mongodb and sql is bigger than people make out.

It's similar to the difference between a strong/statically typed language and a strong/dynamically typed language.

With both, you need to be sure of what type you are expecting.
With the strong/statically typed language (sql), you have to be very careful with the exact data structures that have been defined, and be sure that the nullable/mandatory/empty list rules match between the data type and what the code assumes.
With the strong/dynamically typed language (mongodb), each time you do a method call (query or update) you need to be aware that a method missing (eg "mandatory" data missing) could actually happen with any action. The syntax doesn't allow you to just assume that something will work; you need to think about how the error cases will be handled.

With mongodb, there are no guarantees put on the data structure or content at all.
Just this simple difference puts a fair bit more of the data consistency checking onto the application programmer.
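
For example, here's a rough sketch using mongodb's Java driver from scala (the users collection and email field are invented for illustration); every read has to cope with structure that may not be there:

import com.mongodb.{Mongo, BasicDBObject}

val db = new Mongo().getDB("myapp") // assumes a local mongod instance
val users = db.getCollection("users")

val doc = users.findOne(new BasicDBObject("name", "fred"))
// neither the document nor its "email" field is guaranteed to exist,
// so both missing cases have to be handled explicitly
val email = Option(doc).flatMap(d => Option(d.get("email"))) match {
  case Some(e) => e.toString
  case None => "<no email on record>"
}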

This trade off is perfect for an exploratory project like mine, which has a single client application. But in any data store that had multiple client applications, this trade off would be horrible to manage. There's no way that three different applications could be consistent in their consistency checking, especially if any of them attempted to rectify data problems.

I've previously worked on an LDAP db that had a provisioning interface used to ensure data consistency. All updates would go through that app, and reads would go directly to the LDAP interface.
I can imagine a similar configuration used for nosql data stores: a shared provisioning interface (library/REST/SOAP) that ensures the specific data consistency required for that application domain, instead of SQL's one-and-only structure. That way the queries could make certain assumptions about the data structure.

I think that the flexibility of nosql offers us a new way to abstract over data, which requires us to think differently about the best way of managing that data.

what is it you are really doing?

I was at a family event, and my very young cousin asked my uncle what my job was.

Instead of talking about computers or games or the internet, or any of those things that people normally fall back on when trying to explain what software engineering/development/programming is, he described the actual purpose. He described it as:
Automating all the boring jobs, so that everyone can have a better life.
Which is a much bigger job than just making websites, games or anything like that.

It reminded me of the SICP lectures, where they broke down the title "Computer Science".
They talked about Geometry, whose name derives from the Greek for measuring the earth. Modern geometry has nothing to do with actually measuring the earth; that's just how the abstract discipline started out.

In the same way, Computer Science has nothing to do with computers; that's just how it started out.

What Computer Science (which isn't really a science either, but I'll leave that for a later discussion) is really about is understanding process; thinking about how we do stuff.

This more abstract definition, feeds directly into my uncle's description of the true purpose of software engineering/development and programming.
It has nothing to do with computers; it's about working out how people do things, and automating (simplifying) them.
The upshot is:
  • If you don't know how to do it manually, you can't automate it.
  • Just because you're using a computer to do it, doesn't mean it's better than doing it manually.
Everyone knows what happens when the requirements are wrong, but people don't think enough about whether the solution is actually an improvement.

There have been many times when people have shown me software solutions that don't actually simplify (automate) the process at all. In a few cases, the software was actually more complicated than the manual process.
Those ideas always struck me as stupid, but I couldn't say why they were fundamentally wrong.

You need to ask yourself, is this actually going to help?

2011-03-15

Async Scala

In my last blog post http://naedyr.blogspot.com/2011/03/atomic-scala.html I described a scala wrapper for java.util.concurrent.AtomicReference, which made shared mutable state easy to use from multiple threads.

The next step is actually creating separate threads.

The old java.lang.Thread class is how most people would do multiple threads in java; but this means managing the starting and stopping of threads yourself, and managing your own thread pools.
The java.util.concurrent.Executors class and related libraries were a big step forward. But maybe, just like AtomicReference, their use isn't exactly obvious. My favorite part is the java.util.concurrent.Future interface, which allows a return value from a thread, as well as exceptions to be thrown from the thread and caught by the calling thread.

What I really wanted though, was to be able to run any block of code in another thread, and not have to think about the specifics. So I put together this wrapper class in scala:

import java.util.concurrent.{Callable, Executors, Future}

case class Result[T](private val future: Future[T]) {
  // force the future at most once; later awaits reuse the cached value
  private lazy val value = future.get()
  def await(): T = value
}
object Async {
  val threadPool = Executors.newCachedThreadPool()
  // run any block of code on the thread pool, returning a handle to its result
  def async[T](func: => T): Result[T] = {
    Result(threadPool.submit(new Callable[T]() {
      def call(): T = func
    }))
  }
  def await[T](result: Result[T]): T = result.await()
}

Full source available at : http://code.google.com/p/naedyrscala/

Here's some example usage (assuming import Async._), with a couple of variations on syntax:
val sum = async { val s = (1 to 100000000).reduceLeft(_ + _); println("finished1"); s }
val sum2 = async { val s = (1 to 100000000).reduceLeft(_ + _); println("finished2"); s }
val sum3 = async { val s = (1 to 100000000).reduceLeft(_ + _); println("finished3"); s }
println("do something else")
println(sum.await)
println(await { sum })
println(await(sum3))

With this wrapper, the entire usage boils down to the async and await functions.

  • Pass any block of code to async, and it will run in another thread.
  • To get the result of that thread, call await.

I was inspired by the C# version of async/await, but this scala version is far more general and simpler to use.

If you use await directly on an async call, the effect is to run the block in another thread, but then wait on that thread in the current thread, which is pretty pointless:
val sum = await { async {
  val s = (1 to 100000000).reduceLeft(_ + _)
  println("finished3")
  s
} }
println("do something else")
println(sum)
You can have other threads await on each other. I don't think it's possible to create a deadlock situation with these semantics.
val sum = async { (1 to 100000000).reduceLeft(_ + _) }
val result = async {
  val x = await(sum)
  println("callback " + x)
  x
}
println(await(result)+await(sum))
If you run the code below several times, you'll see differing numbers and orderings of the "+amount"/"-amount" lines, but the final amount, after the awaits, should always be the same.
case class Account(private val initialAmount: Int) {
  val balance = Atom(initialAmount)
  def withdraw(amount: Int) = {
    balance.set { x => println("-" + amount); x - amount }
  }
  def deposit(amount: Int) = {
    balance.set { x => println("+" + amount); x + amount }
  }
}
val account = Account(500)
val results1 = async {
  account.deposit(100)
  account.withdraw(100)
}
val results2 = async {
  account.withdraw(10)
  account.withdraw(10)
}
val results3 = async { account.withdraw(10) }
await(results1)
await(results2)
await(results3)
assertEquals((500 + 100 - 100 - 10 - 10 - 10), account.balance.get)

When combined, async/await and atom provide some very easy to use libraries for shared-state multi-threaded programming.

Both atom and async/await are made possible by closures. I can't wait for Java 8.

2011-03-11

Atomic Scala

A while ago I had to write a test tool that needed to do performance tests.
It needed to be able to open up a whole bunch of connections, send requests, then gather statistics on the responses.
I used Java and the java.util.concurrent libraries, and they worked great. I didn't have to worry about synchronized blocks or waiting, no deadlocks or starvation.

The thing is, I haven't even heard of anyone using these libraries since then.

Most of the Java code I see still does concurrency the way it's been done since the old days, and completely ignores the newer Atomic*, Executors, ThreadPools and Futures.

Maybe people aren't aware of these libraries?
Maybe the syntax seems too complicated?

I wondered how much nicer these libraries could be if they were used from scala; so I wrote a few little wrapper classes that gave the java.util.concurrent libraries a more scala styled interface.

I started out with AtomicReference, which is a way of having shared mutable state between threads. The beauty of this class is that it uses an optimistic synchronisation mechanism, so reads don't block at all, and writes trigger a retry in the case of a collision. Underneath it uses the hardware compare-and-swap instruction (the same primitive that software transactional memory is built on).
It has a different performance profile to lock-based synchronisation, as detailed in http://www.ibm.com/developerworks/java/library/j-jtp11234/.

Here's my Atom wrapper, full source code available at http://code.google.com/p/naedyrscala/

import java.util.concurrent.atomic.AtomicReference

case class Atom[T](private val value: T) {
  private val ref = new AtomicReference[T](value)
  def get(): T = ref.get()
  // apply f to the current value, retrying on collision with another thread
  def set(f: T => T): T = {
    val previous = get()
    val update = f(previous)
    if (ref.compareAndSet(previous, update)) {
      update
    } else {
      set(f)
    }
  }
  // overwrite with a new value, ignoring the previous one
  def set(f: => T): T = set(previous => f)
}

get() just gets the current value (and is NOT blocked at all by other threads).
There are two versions of set. The first takes a block which receives the previous value, allowing you to, for example, increment the current value. The block will be rerun if there are any collisions (ie if another thread is simultaneously running set on the same atom).
The second version just sets a new value, ignoring the previous value.
Both of the set methods return the value that you have set, which is different from calling get after the set, as another thread may have already changed the value by then. This allows you to build a unique key generator, for instance, by incrementing the current value and using the returned result as your unique key.
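
A minimal sketch of that key generator idea (my own example, not from the project):

val ids = Atom(0)
// the value returned by set is this caller's key, even if other threads
// are incrementing the atom at the same time
def nextId(): Int = ids.set(_ + 1)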

Here's some tests to show the usage, with a couple of variations on syntax:
val myAtom = Atom(5)
myAtom.set(5)
assertEquals(5, myAtom.get)
myAtom set 5
assertEquals(5, myAtom.get)
myAtom.set(_ + 3)
assertEquals(8, myAtom.get)
myAtom set { x =>
  x + 3
}
assertEquals(11, myAtom.get)

One catch to the atomic reference is to make sure that any code that is used to generate a new value is inside the block passed to the set method.
In this example, the new value is not being computed inside the set method, which means that the value doesn't take into account any concurrent changes to the myAtom value.
val myAtom = Atom(5)
// intended to be 1 only while the current value is 5
val newValue = if (myAtom.get == 5) 1 else -1
// newValue has already been computed, so it can't be retried;
// imagine if some other thread then set the value
myAtom.set(8)
// this set is setting the 1 we got earlier, even though the old value isn't 5 anymore
myAtom.set(newValue)
assertEquals(1, myAtom.get)
This example shows how the above should have been done; it now responds to the (concurrent) setting of myAtom2 to 8:
val myAtom2 = Atom(5)
myAtom2.set(8)
// now this set is taking in to account the 8 set earlier, and we get -1 as expected
myAtom2.set(x => if (x == 5) 1 else -1)
assertEquals(-1, myAtom2.get)

Another limitation is that the set methods don't nest.
val atom1 = Atom(1)
val atom2 = Atom(2)
// the transactions don't nest properly,
atom1.set { x =>
  val value = atom2.set { y =>
    x + y
  }
  // if there is a collision with atom1 here, 
  // atom2 will have an inconsistent value, until atom1's set retry succeeds
  value * 2
}
assertEquals(6, atom1.get)
assertEquals(3, atom2.get)

I originally played around using the apply method, but the syntax became a bit too minimalistic, ie
val myAtom = Atom(5)
myAtom(_+1)
assertEquals(6, myAtom())
Another variation which I liked, but was also a bit too confusing, was using a member variable, ie
val myAtom = Atom(5)
myAtom.value = _ + 1
assertEquals(6, myAtom.value)

Let's have a look at how this stuff is normally used from Java.
AtomicReference<Integer> atom = new AtomicReference<Integer>(1);
while (true) {
  Integer previous = atom.get();
  Integer update = previous + 2;
  if (atom.compareAndSet(previous, update)) {
    break;
  }
}
assertEquals(3, atom.get().intValue());
VS
val atom = Atom(1)
atom.set(_+2)
assertEquals(3, atom.get)
No wonder no one uses AtomicReference in Java! Just setting a value correctly is a real pain.

2011-02-07

software that changes easily

I just read a blog : http://www.thoughtclusters.com/2011/01/software-for-multiple-customers/,
which talks about making software easy to change.

I think what he misses is that there are many levels at which the software can be made easy to change.
I think the level he is talking about is writing generic code, which allows developers to change the database implementation, or writing various configurable options.

If you do these things, it will take longer to go to market, as the writer remarks.

You can however do other things, which will make the software easy to change.
  • A programming language that allows changes to be made easily and safely
  • A powerful SCM tool that the developers actually know how to use
  • Tests that actually matter to the functionality of the product
  • A release process that has a fast track for critical changes
  • A lightweight release process
All of these things will make it easy to change your software, and therefore keep it responsive to the customers' needs.

2011-02-02

BPEL is not programming

In the brilliant MIT lectures (and book) Structure and Interpretation of Computer Programs, they talk about how to rate the worth of programming languages.

The two criteria are abstraction and combination.
  • Simply put, a good/powerful/useful language will allow you to use abstraction to simplify complex processing; ie hiding the details, while making the processing more general.
  • A good language will allow you to combine the language components in meaningful ways, that represent the relationship of those components. Good combination allows you to combine the user defined abstractions in the same ways as the built in language components. 
  • When abstraction and combination feed off each other, when you can abstract on the combination and combine abstractions; this is when you have a great language. (A tiny scala sketch of this follows.)
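To make that concrete, here's a tiny sketch (my own example, not from SICP) of a user-defined abstraction in scala combining just like a built-in control structure:

// a user-defined control structure: abstracts over "do this n times"
def repeat(n: Int)(body: => Unit) {
  (1 to n).foreach(_ => body)
}

// it combines the same way the built-ins do: nest it, pass blocks to it
repeat(2) {
  repeat(3) { println("hello") }
}
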
As this SOA guru says, BPEL isn't really programming.

The only kind of abstraction at all is making a whole new service. Using the JDeveloper BPEL view, you can hide the details of the verbose xml source code; but then all you see is an overly simplified 'pictures of boxes' representation. For example, an "assignment" block with an optional user defined name. Simplified, yes; but not actually abstracted.
The lack of abstraction affects what little combination options there are.
BPEL gives you variables with XML types tied to your schema, other services which you can call, variable assignments, java code blobs, xsl transforms, some simple exception handling, scoped blocks, and very simplistic control structures (if/else and loop), as well as a parallel block.

That may seem like a reasonable set of built in features, in particular the parallel block, but without subroutines or functions of any sort, without structures or objects; the kinds of combination are simplistic rather than simplified.
Each basic component can only be combined in prescribed ways with certain other built in components. For example, the only data type is the XML variable, so all of the transforms and assignments only operate on that one thing.
With no way to define new data types or functions, the kinds of combination are limited to the built in components, and the hard coded ways that they interact.

This language simplification doesn't even achieve the elusive idea of a "business person" being able to program in it. The overly simplistic nature of the language means that even programmers have a hard time with it.

A better way to let non-programmers do programming would be to develop an internal DSL for the actual business domain (a rough sketch follows). BPEL is not even close to something that high level, and it's not capable of building such a thing.
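
As a sketch of the idea (all the domain words here are invented for illustration), an internal DSL in scala for an ordering domain could read close to the business rules themselves:

case class Order(id: Int, total: BigDecimal)

class Rule(val applies: Order => Boolean, val outcome: String)
// a tiny DSL word that lets each rule read almost like the business statement
def rule(outcome: String)(applies: Order => Boolean) = new Rule(applies, outcome)

val rules = List(
  rule("reject") { o => o.total <= 0 },
  rule("escalate") { o => o.total > 10000 },
  rule("approve") { o => true })

def decide(o: Order): String =
  rules.find(_.applies(o)).map(_.outcome).getOrElse("no decision")

assert(decide(Order(1, 500)) == "approve")
assert(decide(Order(2, 50000)) == "escalate")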

I don't think it actually deserves to be called a programming language.

It really does deserve the name BHell.

tablets are not taking over

Gah! Even Paul Graham has bought into the "tablets are taking over" bullshit.

I'm getting sick of this assumption that anything that apple brings out will be a complete success, and will change the computing world.

Tablets (whoever is making them) are just the next step for mobile phones in becoming more like the general computing device that most people have on their desk, or their lap.

The truth is, it was mobile phones that could do more than just make calls (ie feature phones), that really changed the mobile computing world.

According to wikipedia, in 2009 83% of all phones (in the US) were feature phones. It's only when a technology is ubiquitous that its impact is fully felt.

And now it's official: everyone in the world has a mobile phone, a fair number being feature phones: http://hothardware.com/News/ITU-Finds-2-Billion-Internet-Users-Worldwide-5-Billion-Mobile-Subscriptions/. Ok, not actually everyone (number of subscriptions != number of subscribers), but nearly one for every person.

It was when everyone started carrying around a device which enables one to take a video of an event and send it to all one's friends, or tweet every single thought and action to the world; that's when things changed.

After that point, you are merely improving the user interface, adding more power, bigger screen, faster internet connection. It's all incremental (potential) improvement from there forward, not the giant leap forward that everyone seems to think it is.

I don't think that mobile devices which are deliberately crippled by having no call/sms/mms functionality will have much of a future. This includes all "mobile" computers. It won't be long before "phones" have the computing power to do everything a netbook can, and at that point most people will be able to do all of their computing on that one device.

These mobile computing devices will still just be called "phones" by most people, because whatever the device is, you'll still be carrying it around with you in your pocket like you carry your phone now. Even if you are using it for everything except making "telephone calls".

2011-01-19

code that will hack your mind

I've been reading an excellent book, Mind Hacks, on my kindle recently.

It describes the mind as a collection of parallel sub-processors which work together to form the conscious mind.
For instance, there is a part of the brain which identifies shadows, and it runs very early in our visual processing. It is an over-simplified process which helps to identify possible threats very quickly. A much more accurate process works on shape recognition, but it is also slower. This is a rationale for people jumping at shadows, but quickly recovering.

The processing is done in parallel, but some results override others. A good experiment (illusion) to show this edge case is the colour and word recognition problem (the Stroop effect).
One sub-process of our mind is reading the words, (and normally brings the sound of each word into our mind as we read them).
Another part is recognising the actual colour of the text, and converting that colour sensation into a word.
In this case the illusion is caused by the sub-process that is reading the words overriding the colour recognition sub-process.
This collision shows the importance of these automatic processes in how we read and understand text.
There are other sub-processes which can collide with our word recognition, one of which is for recognising shapes. Our mind automatically looks for a meaningful shape in the image, which seems to have a white background and black foreground. It is only when our word recognition sub-process comes in that you can see the white word on black background. Once your mind has locked onto the word, it's difficult to see the amorphous black shape that you originally saw.

Now, we've all seen optical illusions before and it seems hard to believe that any of this affects you in your normal day to day life. These edge cases (illusions) are rarely seen in reality, mostly because when we're reading any substantial body of text (like this blog), it's pretty much always in black and white, and any images are nicely separated from the text. Even the structure of the text, the shape of each line and paragraph is pretty standardised. This ensures that our shape recognition sub-process doesn't interfere with reading, except when scanning through a body of text for a particular section. That's all true for almost all natural language that we encounter.

It is not the case for programming languages though.

This is the real reason that programmers get into so many arguments of tabs-vs-spaces and IDE wars over code auto-formatting, and syntax highlighting. It also explains why different languages are fought over so vehemently when it seems to outsiders like the similarities are greater than the differences between them. I want to show why all of this stuff really does matter.

It's all about hacking our minds.

The following points may seem stupidly obvious, or obviously wrong, but since there hasn't been any research into this that I know of, they'll have to be assumptions.
  1. When reading code, the mind uses the same processes as when reading natural language.
  2. Some part is trying to recognise whole words, and determine the meaning of each word.
  3. Some part of the brain is trying to say each word (token) out loud, with punctuation either ignored or used to structure the rhythm of the word sounds.
  4. Some part is looking for meaningful shapes in the code, to determine relatedness and structure.
This has some implications for source code.

2. implies that words (tokens) must have meaningful names, which sounds stupidly obvious, but it's something people still ignore. They must be obvious and simple names that we recognise instantly; otherwise we work against the automatic word recognition our minds have.
3. implies that our source code should be pronounceable. Actually speak your code out loud. Does it make sense to someone else if you say it out loud? Does it flow properly, or does it have a stunted rhythm?
4. implies that blocks of code should have a meaningful shape. People are aware of this when it's done badly, but it's rarely seen as an image problem. Usually it's treated more like structuring paragraphs, when we should be using the source code and punctuation to show pictorially what the code means.

I would really like to have some code examples here, but that would actually require getting this stuff right. Which means lots of experimentation.

I will,

if I get time,

... later.

java design principles

The design principles behind smalltalk could easily describe the design principles behind java.

keyboard of the future

I've had a few years of very painful RSI in my forearms, due to working in a stressful office environment, spending most of my time at a desk with a mouse and keyboard.
In a constant search for things to help with the pain, I've gone through a long list of pointing and typing devices.
I've tried split keyboards, sideways mice, trackballs, rollermouse; even using a web cam to track head movement to control the mouse.
I was most recently using an ergonomic keyboard at work (a kinesis-freestyle, with an extra touchpad) and a microsoft ergonomic keyboard at home.

After reading the-keyboard-cult I thought I should try mechanical keyboards.
I looked at daskeyboard, happy hacker, miniguru (which may come out in kit form) and some others, but they were either very expensive or ugly.

Then I found the razer blackwidow.

I have a couple of razer gaming mice at home, and hard mice mats which are both great. So I thought I'd give the blackwidow a go.
  • It has mechanical switches (Cherry MX Blue) - clicky and tactile.
  • It is fully and easily programmable - I set up my dvorak + Caps-as-Ctrl layout in a matter of minutes
  • It's all black - matte keys and gloss body
I'm very happy with it!

I'm not sure about the other mechanical keyboards available, but this one allows you to program every single key with a different key or macro for each profile (apart from the Fn key, which is used to select profiles). It will even switch profiles automatically for different programs.

best part is, I got one from CPL for only $109 :D

I've heard some people complain about the noise level, but I wouldn't say it would bother anyone.
The feel is very much like a mouse click, and as a result I do think it makes a difference to how heavy-handed my typing is.
I think that only pushing the keys until the click is felt (and heard) should make a difference to my RSI as well, as it is more gentle on the fingers.

Looks great, feels good, will save time (via macros) and doesn't cost $350 :P

I can't imagine going back to my old keyboards ..... I may just have to get one for home as well ;).

I've gone open source

I've finally started my own open source project!

The incredibly imaginatively named project naedyrscala

It has bits and pieces of experiments and examples that I've done in scala,
as well as some tools that I like using.

It includes the general memoization function defined in can-you-balance-space-and-time.
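
For reference, here's a minimal sketch of what such a general memoization function can look like (the real Mem lives in the naedyrscala project; this version is my own reconstruction and isn't thread safe):

object Mem {
  // wrap any single-argument function with a cache of its previous results
  def apply[A, B](f: A => B): A => B = {
    val cache = scala.collection.mutable.Map[A, B]()
    a => cache.getOrElseUpdate(a, f(a))
  }
}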

People don't understand Tail Call Optimization

Chad Perrin says in his article about memoization that
"Tail call optimization ... optimizes a recursive function so that it will not perform the same operations over and over again.",
then goes on to describe memoization as an alternative for languages that don't support TCO.

This is not what TCO is at all.

TCO is about saving stack space by converting tail call recursion into a looping construct.
It has nothing to do with optimizing which operations are performed.

Please read http://en.wikipedia.org/wiki/Tail_call so that you actually understand.

I've written about memoization before in can-you-balance-space-and-time and (as the title suggests) its primary purpose is to swap CPU time for memory space, by caching results.

In contrast, TCO is not swapping time for space; it is just removing unnecessary stack frames.

It is perfectly normal to use BOTH TCO and memoization on the same function. Here's some scala to demonstrate:

def factorial(n: Int) = {
  @annotation.tailrec
  def loop(n: Int, acc: Int): Int = {
    if (n <= 1) acc else loop(n - 1, acc * n)
  }
  loop(n, 1)
}
assert(1 == factorial(1))
assert(120 == factorial(5))
val mfact = Mem(factorial(_))
assert(1 == mfact(1))
assert(120 == mfact(5))

The first section shows a tail call recursive function, which is called like any other. The only difference is that at run time it won't ever hit the memory-limited maximum stack, so it can be used for large arguments and still find a result (though Int will overflow long before the stack would have).
Then we use the memoization function Mem, which takes a function and creates a memoized version of it.
Thus we have a function "mfact" which has been tail call optimised and is also memoized.
If we were to call mfact a second time with the same argument, it would take no time at all, as the result has been cached; and it still won't hit the maximum stack (due to TCO).