Create the RAM disk:
diskutil erasevolume HFS+ "ramdisk" `hdiutil attach -nomount ram://1165430`
(The magic number 1165430 is the disk size expressed as a number of 512-byte sectors; 1165430 × 512 bytes ≈ 569 MB. The string “ramdisk” is just the name of the disk, which will appear under /Volumes after creation.)
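If you want a different size, you can work out the sector count from the target byte count. A minimal sketch in Python (the 1 GB target is just an example):

# Compute the sector count for hdiutil's ram:// URL.
# ram:// sizes are given in 512-byte sectors.
size_mb = 1024  # hypothetical target size in MB
sectors = size_mb * 1024 * 1024 // 512
print("ram://%d" % sectors)  # ram://2097152 for 1 GB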
Write some stuff to the RAM disk:
echo "hello RAM disk" > /Volumes/ramdisk/hello.txt
Eject the RAM disk:
diskutil eject /Volumes/ramdisk
Gathered from here and here so that I remember for later.
Found this guide for creating a RAM disk on Linux, but have not tried it yet.
Prepare the database by installing PL/Python and a function:
CREATE EXTENSION IF NOT EXISTS plpythonu;
CREATE OR REPLACE FUNCTION hello_lp()
RETURNS float8[]
AS $$
import subprocess
# Run the external solver script and capture everything it prints
# (the path is a placeholder; point it at wherever you save the script below)
output = subprocess.Popen(
    ["python", "/path/to/hello_lp.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT
).communicate()[0]
# Parse string of comma-separated floats
return map(lambda x: float(x), output.split(","))
$$ LANGUAGE plpythonu;
Install the LP solver (cvxopt, which the script below imports; e.g. via pip).
External Python script (save it at the path referenced in the function above):
from cvxopt import matrix, solvers

# Suppress solver progress output so only the result is printed
solvers.options['show_progress'] = False
A = matrix([ [-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0] ])
b = matrix([ 1.0, -2.0, 0.0, 4.0 ])
c = matrix([ 2.0, 1.0 ])
# Solve the LP: minimize c'x subject to A x <= b
sol = solvers.lp(c, A, b)
# Print string of comma-separated floats
print ",".join([str(x) for x in sol['x']])
Here is a project I’m working on called the Web Co-Processor (credit for name goes to Marcos Vaz Salles). There is a demo you can try if you clone the project.
Read the following bullets and you’ll understand what the web co-processor is about:
- A person opens a webpage in their browser
- The person’s browser is now a core in the web co-processor
If you are thinking botnet, you are forgiven. The idea is similar, but the intent is different. Although I came up with the idea independently, I have since found that other people have thought about similar ideas before.
I’m serious about this idea. I think the idea of massive numbers of transient cores and memory offers some very interesting challenges. I’m keeping a lid on the ideas I have for the web co-processor, but you can follow my GitHub repository to get updates as stuff happens. Action is expected to be sporadic but relatively intense.
Feel free to clone github.com/skipperkongen/webcoprocessor.
git clone git@github.com:skipperkongen/webcoprocessor.git
Here is something fun to do on a sunny day. The idea is the following: a group of people collectively designs an algorithm by playing a game.
Continue reading “Evolving database algorithms through human experiments”
As the title says, this post is just a filtering of today’s proggit: the posts that caught my interest that day.
Continue reading “Systems Stuff on Proggit (May 2013)”
This video was mentioned on highscalability.com, so I thought I’d have a look. Knowing this stuff is useful when you’re in the business of delivering large amounts of geographical data to a large number of clients.
C10M = 10 million concurrent connections.
Continue reading “Graham C10M talk at Shmoocon 2013”
OK, calling it a benchmark is a bit of an overstatement. It’s taking two different database libraries for a quick spin, and seeing how fast they can write a bunch of integers to disk. A second benchmark checks how fast we can read them.
In this mini-test, I’m running leveldb against a new embedded database library, let’s call it system_x. The purpose is really just so that I can remember some rough numbers regarding these useful database libraries.
I used the time command to gather results, which reports real, user and sys time spent.
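For reference, the write half of such a benchmark might look like this in Python. This is a sketch using the plyvel LevelDB binding, which is an assumption; the post does not say how the test was actually written:

import time
import plyvel

# Open (and create) a LevelDB database in a scratch directory
db = plyvel.DB('/tmp/bench_leveldb', create_if_missing=True)

start = time.time()
for i in range(1000000):
    # One put per integer; keys and values must be bytes
    key = str(i).encode()
    db.put(key, key)
db.close()
print('wrote 1M integers in %.2f s' % (time.time() - start))

# The read benchmark: iterate over all key/value pairs
db = plyvel.DB('/tmp/bench_leveldb')
start = time.time()
for key, value in db:
    pass
print('read them back in %.2f s' % (time.time() - start))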
Continue reading “Sequential writes leveldb versus system_x”
Rtree is a ctypes Python wrapper of libspatialindex that provides a number of advanced spatial indexing features for the spatially curious Python user.
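A minimal taste of the API, sketching the classic insert-then-query pattern (the box coordinates are made up):

from rtree import index

# Build an in-memory R-tree index
idx = index.Index()
# Insert entries: (id, (left, bottom, right, top))
idx.insert(0, (0.0, 0.0, 1.0, 1.0))
idx.insert(1, (2.0, 2.0, 3.0, 3.0))
# Query: which entries intersect this window?
print(list(idx.intersection((0.5, 0.5, 2.5, 2.5))))  # both ids, 0 and 1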
Continue reading “Trying a Python R-tree implementation”
A friend of mine, who is the CEO of a company that develops an embedded database, asked me to do a presentation on spatial indexing. This was an opportunity for me to brush up on R-trees and similar data structures.
Download the slides
The presentation introduces R-trees and spatial indexing to a technical audience without prior spatial indexing expertise.