2010-06-09
java dead?
I don't think Java's dead,
it's just dead boring.
2010-06-02
I can't not submit
There are times when I'm not allowed to submit.
I wish I'd made a branch to begin with
So I'm going to run my own Subversion server on my local development machine:
VisualSVN Server
TortoiseSVN client
2010-06-01
Can you balance space and time?
The balance between space and time is still relevant:
http://blog.tmorris.net/you-lazy-thunk/
In scala, there's the lazy keyword which will make an assignment lazy.
But I didn't know of anything that would trade space for time, ie caching.
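To make the first point concrete (a quick sketch of mine, not from the original post; the object and field names are made up for illustration), lazy defers an initialiser until first access and then caches its single result:

```scala
object LazyDemo {
  // Counts how many times the initialiser below actually runs.
  var evaluations = 0

  // The right-hand side runs at most once, on first access;
  // every later access reuses the stored value.
  lazy val expensive: Int = {
    evaluations += 1
    21 * 2
  }
}
```

Before `LazyDemo.expensive` is touched, `evaluations` stays at 0; after any number of accesses it is exactly 1. That's laziness trading when work happens, not space for time.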
I was reading about Clojure (a Lisp on the JVM) in Programming Clojure,
and they mentioned the built-in memoization: http://en.wikipedia.org/wiki/Memoization
So I decided to do my own implementation of memoization in scala.
def mem[A,R](func: A => R): A => R = {
  var cache = Map[A,R]()
  (a: A) => {
    if (cache.contains(a)) {
      cache(a)
    } else {
      val result = func(a)
      cache += a -> result
      result
    }
  }
}

val double = (x: Int) => {
  println("double " + x)
  x * 2
}

double(2)
double(2)

val memDouble = mem(double)
memDouble(2)
memDouble(2)
memDouble(4)
memDouble(4)
When you run this in the REPL you get:
scala> double(2)
double 2
res7: Int = 4
scala> double(2)
double 2
res8: Int = 4
scala> val memDouble = mem(double)
memDouble: (Int) => Int = <function1>
scala> memDouble(2)
double 2
res9: Int = 4
scala> memDouble(2)
res10: Int = 4
scala> memDouble(4)
double 4
res11: Int = 8
scala> memDouble(4)
res12: Int = 8
As you can see from the REPL output, calling double executes the body every time, i.e. it prints "double 2" on each call.
memDouble is different: the first time you call it, it prints; on later calls with the same argument, it just returns the cached value.
Using this mem function, we can automagically cache the results of any function that takes a single argument.
In order for it to work with functions of any arity (number of arguments), we need to implement different versions for each arity.
We can then wrap those functions up in a singleton object, to hide the arity details.
Here I've done it for 1- and 2-argument functions; it's pretty easy to extrapolate to the rest. The key of the cache map becomes a tuple of the arguments.
object Mem {
  def mem1[A,R](func: A => R): A => R = {
    var cache = Map[A,R]()
    (a: A) => {
      if (cache.contains(a)) {
        cache(a)
      } else {
        val result = func(a)
        cache += a -> result
        result
      }
    }
  }

  def mem2[A,B,R](func: (A,B) => R): (A,B) => R = {
    var cache = Map[(A,B),R]()
    (a: A, b: B) => {
      if (cache.contains((a,b))) {
        cache((a,b))
      } else {
        val result = func(a,b)
        cache += (a,b) -> result
        result
      }
    }
  }

  ...

  def apply[A,R](func: A => R) = mem1(func)
  def apply[A,B,R](func: (A,B) => R) = mem2(func)

  ...
}
object Test extends Application {
  val mult = (x: Int, y: Int) => { println("mult"); x * y }
  val double = (x: Int) => { println("double"); mult(x, 2) }
  val hello = (x: String) => { println("hello"); "hello " + x }

  val memDouble = Mem(double)
  memDouble(2)
  memDouble(2)
  memDouble(4)
  memDouble(4)
  memDouble(4)

  val memMult = Mem(mult)
  memMult(2, 3)
  memMult(2, 3)
  memMult(4, 4)
  memMult(4, 4)

  val memHello = Mem(hello)
  memHello("me")
  memHello("me")

  case class MyObj(value: Int) {
    def multiply(x: Int) = x * value
  }
  val memMyTriple = Mem(MyObj(3).multiply(_: Int))
  memMyTriple(2)
  memMyTriple(2)
}
As you can see, it's not just for basic functions; it works with methods on objects as well.
I've found these functional concepts fairly straightforward to implement in Scala, the only wrinkle being the different function arities.
This is an effective way to use space in order to save time, while leaving your source code relatively unaffected.
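To push the same space-for-time idea a little further (a sketch of mine, not from the post above; the object and counter names are made up): memoization is most dramatic on recursive functions such as Fibonacci, where caching each distinct argument turns exponential time into linear:

```scala
import scala.collection.mutable

object FibDemo {
  // Counts how many times the function body below is actually evaluated.
  var bodyEvaluations = 0

  private val cache = mutable.Map[Int, BigInt]()

  // Naive recursive definition, but each distinct n is computed only once;
  // later calls for the same n are answered straight from the cache.
  def fib(n: Int): BigInt = cache.get(n) match {
    case Some(cached) => cached
    case None =>
      bodyEvaluations += 1
      val result = if (n < 2) BigInt(n) else fib(n - 1) + fib(n - 2)
      cache(n) = result
      result
  }
}
```

Computing fib(30) evaluates the body just 31 times (once for each n from 0 to 30) instead of over a million times for the uncached version. Like the mem functions above, the cache here is a plain mutable map with no synchronisation, so this is a single-threaded sketch only.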
Labels: cache, clojure, functional, lazy, memoization, programming, scala, space, time
2010-05-26
The paradigmatic chasm between programmers and normal humans is huge.
I was just reading a blog post entitled The Myth of the Super Programming Language and came across an astonishing comment, partially reproduced here (as I can't link to the comment itself):
Here’s a case study. I once asked our secretary to backup our shared mini-computer in the morning, before the rest of the company showed up. I wrote down, on paper, a simple set of instructions, that covered every case I could think of. For example, I wrote down “Go to any terminal. If you see c:> on the screen [ed. i.e. a developer forgot to logout before going home the night before], type ‘logout’. When you see ‘login:’ type ‘operator’, and then when you see ‘password:’ type ‘yyy’. Then type ‘backup’.”
I came in one morning and found the following on EVERY screen in the office:
c:> logout
login: operator
password: yyy
c:> logout
login: operator
password: yyy
c:> logout
login: operator
password: yyy
…
I had to ask her why she did this – being a programmer, I couldn’t understand what was in her mind. It turned out that the concept of ’sequence’ was not bred into her thought process. In essence, she treated the instructions in a declarative, pattern-matching manner. After every action, she re-scanned the whole sheet of instructions and picked the closest match, instead of treating the instructions in a sequential manner as I had intended. Every time she successfully logged in as operator, the closest match on the sheet of instructions was “if you see C:>, type ‘logout’”, so she did that. Again and again.
The paradigmatic chasm between programmers and normal humans is huge. We can’t even recognize that programmable computers are a burden to the general population, not a boon.
What we need are programming languages that allow software engineers to produce products – encased in epoxy – that provide solutions to specific problems. Whether we choose to use programmable computers inside the epoxy is our own problem, not the customers’.
Wow.
I think I now understand how there is a market for Apple products.
I truly never understood before.
2010-02-20
Disagree and agree so strongly
I watched a strange documentary-type program recently:
Zeitgeist
It covers a whole heap of different (sometimes completely unrelated) ideas.
One thing that I liked was its claim that there is some factual basis to the Christian bible: that its stories and characters are direct representations of astrological signs and their astronomical connections.
That was about it though.
duct tape programming
Everyone thinks it at one point: "It's ugly, but it will work", and the next thought is "I'd like to make it better, but I don't have time right now".
Now it has a name: duct tape programming
It's always a trade off, between two extremes:
wasting time on something that already works
vs
writing something that will fall apart with the next modification (or in a different environment)
It's important to be aware of the decision you are making.