These days the buzzword seems to be "hypervisor". It's a general term for what VMware is. One thing I've always found to be a good use case for virtualization is testing new and possibly broken configurations. These past few days I've been recovering from food poisoning while working from home, but it did give me a chance to play around with the latest in Linux virtualization: KVM.
KVM in this sense is not that hardware device for connecting one monitor to multiple machines, so the choice of name is unfortunate. It is based on QEMU, but adds kernel hooks that make things go faster by utilizing the virtualization features of certain processors. What's great is that recent distributions ship it as a simple kernel module that doesn't require any kernel patching.
First, some background. Why am I interested in this? Well, in my case I run a website on Apache (who doesn't?). I've been hearing some good things about lighttpd and wanted to see what the fuss was about. However, I didn't want to muddy my current Apache server with lighttpd. I simply wanted another system to play around with, preferably one that didn't require buying hardware. So this seemed like a perfect case to try out the completely free KVM.
Now my host OS (the OS that will run KVM) is an Ubuntu 7.10 server. It actually runs on a closed laptop, so essentially I have no display on it and only connect to it remotely. Most virtualization software these days is a GUI app unless you buy the horribly expensive 'server' products. I was somewhat pleasantly surprised at how KVM worked in my scenario. Ideally I wanted to run a guest OS the same as my host OS, Ubuntu 7.10, so that's what I started with.
First I installed the kvm package:
sudo aptitude install kvm
Easy enough. Next I modprobed the right modules:
sudo modprobe kvm
sudo modprobe kvm_intel
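Whether kvm_intel (or kvm_amd on AMD boxes) will actually do anything depends on the CPU. A quick sanity check, not from the original post, is to count the virtualization flags the kernel exposes:

```shell
# vmx = Intel VT-x (kvm_intel), svm = AMD-V (kvm_amd). A count of zero
# means the accelerated kvm modules are useless on this machine.
# The || true guard keeps grep's nonzero exit from aborting the script.
count=$(grep -E -c 'vmx|svm' /proc/cpuinfo || true)
echo "CPU entries with virtualization extensions: $count"
```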
I should point out you need a processor that supports KVM; see their web page for more info. Now it gets a bit confusing. Some documentation mentions running "qemu-system-x86_64", but on my box this is the unaccelerated version that makes no use of KVM's kernel modules. I'm not sure what's going on here, but I believe at some point QEMU and KVM will merge code, and maybe this is the reason for the documentation discrepancy. Anyhow, in my case I had to use "kvm" to start up my virtual machine. So I downloaded the Ubuntu 7.10 server ISO and began my journey. First I needed to create a disk image. Interestingly, these disk images only take up space as you add to them, similar to VMware:
qemu-img create -f qcow vdisk.img 10G
This creates a 10G-maximum disk image. Now we're ready to begin the installation:
sudo kvm -hda vdisk.img \
    -cdrom ubuntu-7.10-server-i386.iso \
    -boot d -m 384 -vnc 192.168.1.137:0.0 \
    -no-acpi
This simply boots the VM with the Ubuntu server CD and 384M of RAM. It's recommended to use -no-acpi, so I did. Now BAM, I get an "exception 6" and an immediate crash (as described here). My foray was not starting off well. After more searching I came across this bug, which hinted that this was maybe fixed in a newer version of Ubuntu. Huh? So something in Ubuntu is causing this crash? Searching further I found this thread, which gives more info:
Confirmed here with kvm-intel and KVM 39. Invalid opcode (#UD) is probably caused by the boot splash code which may be using big real mode code.
So indeed, something in Ubuntu was doing it. So I decided to try grabbing the bleeding edge Ubuntu "Hardy Heron" server. After some thumb twiddling it now boots!
Now remember how I have no display on this box? Well, the convenient -vnc argument creates a VNC server that I can connect to remotely! This is all great, but kvm also supports X, and I actually do most of my work from an OS X X terminal. Why not just use the X interface? Well, on OS X the keyboard becomes completely unusable in KVM for some reason; it could be something broken in the X server. Anyhow, for VNC there is Chicken of the VNC, by far the stupidest package name ever. It generally works well, but when I tried to connect to my VM I was getting an invalid rectangle error. It seemed everything was against my getting this working. After much experimentation, I found that I had to simply disable Hextile encoding in the VNC connection profile and voila, I could connect.
Now I thought to myself, I'm going to be running a webserver on this, don't I need some kind of network setup? Well once I confirmed things are booting ok, I did some research on KVM networking. Essentially what I wanted is my VM to appear just like a separate machine on the network with full network access. KVM has all sorts of networking possibilities, but here is my setup. First I updated /etc/network/interfaces to look like:
auto lo
iface lo inet loopback

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_maxwait 2
    up /sbin/ifconfig eth0 inet 0.0.0.0 promisc

auto eth0
iface eth0 inet static
    address 172.16.5.0
    netmask 255.255.255.0
This generally came from this Ubuntu KVM doc. The IP above is actually a bogus one that won't be used. Rebooting (alas, I did have to reboot) gave me a br0 and eth0 device listed via 'ifconfig'.
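The same check can be done with the newer iproute2 tools. This sketch (device names as configured above) is guarded so it runs harmlessly even on a machine without the bridge:

```shell
# Report whether the bridge and its enslaved NIC actually exist.
for dev in br0 eth0; do
    if ip link show "$dev" >/dev/null 2>&1; then
        status="$dev: present"
    else
        status="$dev: not on this machine"
    fi
    echo "$status"
done
```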
My host machine still worked so that's good.
Now how to start the VM with networking? Simple:
sudo kvm -hda vdisk.img -m 384 \
    -vnc 192.168.1.137:0.0 \
    -no-acpi -net nic -net tap
I went through the install with no problems at all and it got an IP via DHCP on my router. Sweet.
Now one thing that was bothering me about all of this is that pesky sudo prefix. I would've liked to not use sudo. This forum thread mentions a possible solution, but I had no luck with tunctl and didn't feel like spending too much time on it.
So I got my cool VM working, now what? For what reason am I on this Earth? On my main web server I have a Django site running under Apache. What I wanted to try is running this with FastCGI under lighttpd. Now I have to admit, lighttpd configuration is a much nicer experience than Apache configuration.
Take a look at my lighttpd.conf. My virtualhost is defined at the bottom, and is taken mostly from the Django fastcgi docs. lighttpd has a very simple and elegant configuration. Note that I had to modify some of the modules loaded, and the Django docs seem to indicate the ordering is important. Once I started up my Django site in fcgi mode my site was instantly accessible, and FAST. At least, very fast for a virtual machine!
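For reference, the shape of that virtualhost was roughly as below. The hostname, paths, and port here are illustrative stand-ins following the Django fastcgi docs of the era, not my exact config:

```
$HTTP["host"] == "www.example.com" {
    server.document-root = "/var/www/mysite"

    # Hand requests to the Django fastcgi process listening locally.
    fastcgi.server = (
        "/mysite.fcgi" => (
            "main" => (
                "host" => "127.0.0.1",
                "port" => 3033,
                "check-local" => "disable",
            )
        ),
    )

    # Rewrite every URL into the fastcgi handler.
    url.rewrite-once = (
        "^(/.*)$" => "/mysite.fcgi$1",
    )
}
```

The Django side would then be started with something like './manage.py runfcgi method=prefork host=127.0.0.1 port=3033' (the runfcgi command existed in Django of that vintage).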
So what have I learned from all of this? Well KVM is cool, and one of these days it is going to beat out its commercial counterparts. Also, lighttpd is cool. Cooler than Apache I must say. It will definitely be coming to more web servers near you.
I have an old Thinkpad laptop. Since the new Ubuntu 7.04 came out, I wanted to try it out. The laptop is pretty much useless as a desktop with 192M RAM, but I've been hearing good things about the server version of Ubuntu and decided to give it a whirl. Now I have a laptop server that I use for simple things like an SSH gateway for me to connect externally.
Ubuntu server is pretty nice, but as is typical of Debian distros, it's rather bare bones. I had to do a lot of package downloading to get to a usable system. What do I think of Ubuntu as a server? Well, I'm used to Red Hat, and I've griped about Debian in the past, namely update-rc.d. The same problems I mentioned in that blog in 2005 still exist today. Searching around I found that 'rcconf' is another useful tool. Why not install that by default then?
I also have major gripes with the dbconfig stuff. This is an effort to make database configuration package-friendly. It works, at least until you start removing packages and trying to reinstall them. I understand the reasoning for it, but it is just more pain than necessary. It's very easy to screw up the configuration completely.
Anyway, once I got things set up, I liked it. Next up was trying to mount some filesystems from my Mac Mini. I installed netatalk, but soon realized this was for the opposite purpose: sharing Linux filesystems to a Mac. This page indicates that the afpfs filesystem module, which I needed, is unmaintained. Next I found afpfs-ng, which is a FUSE module. That sounds great and all, but no matter what I did I could not get it to authenticate properly with my Mac.
I decided to ditch AFP and setup my mac to export shares via Samba. It's tried and true and mounts on Linux flawlessly. But I had a few shares to mount, all containing multimedia stuff. Now comes the cool part: unionfs.
On my mac I have 2 directories with movies, one on an internal drive and one on an external. I shared them out and mounted them on Linux, with something like this in /etc/fstab (using \ as line continuation below):
//192.168.1.100/valankar /mnt/mini smbfs \
    credentials=/home/valankar/.smbcredentials, \
    uid=valankar,gid=valankar,fmask=600,dmask=700 \
    0 0
//192.168.1.100/stuff /mnt/stuff smbfs \
    credentials=/home/valankar/.smbcredentials, \
    uid=valankar,gid=valankar,fmask=600,dmask=700 \
    0 0
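The credentials= option in those entries points at a small file holding the share login. A minimal sketch of creating one (the username and password below are placeholders, not real values):

```shell
# Write the credentials file that smbfs reads at mount time.
cat > "$HOME/.smbcredentials" <<'EOF'
username=valankar
password=changeme
EOF

# Lock it down so only the owner can read it -- the whole point of
# keeping the password out of the world-readable /etc/fstab.
chmod 600 "$HOME/.smbcredentials"
perms=$(stat -c %a "$HOME/.smbcredentials")
echo "permissions: $perms"
```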
The credentials file is so I don't have to include my password in the world-readable /etc/fstab. So this is great, but what I'd really like to do is combine /mnt/stuff and /mnt/mini in to one virtual directory. After installing the unionfs-tools package, I added to fstab:
unionfs /mnt/movies unionfs \
    dirs=/mnt/mini/Downloads:/mnt/stuff/Movies \
    0 0
Now when I go to /mnt/movies and do 'ls', I see a combination of both directories. This is just so cool. When I write to the directory, the first real directory gets priority, but I thought: this would actually be a very cool filesystem if writes were put on the drive with the most free space. There would need to be some coordination when making directories, but this could essentially let me mount many different types of drives, even with disparate filesystems, and combine them into one writable virtual filesystem.
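That wished-for write policy is easy to sketch in plain shell: ask df which branch's filesystem has the most available space and direct new files there. The directories below are stand-ins for the real union branches:

```shell
# Print whichever directory's filesystem has the most available space.
pick_branch() {
    best="" ; best_avail=-1
    for d in "$@"; do
        # Column 4 of POSIX `df -P -k` output is available kilobytes.
        avail=$(df -P -k "$d" | awk 'NR==2 {print $4}')
        if [ "$avail" -gt "$best_avail" ]; then
            best="$d" ; best_avail="$avail"
        fi
    done
    echo "$best"
}

target=$(pick_branch /tmp /var/tmp)
echo "a new file would land in: $target"
```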
Turns out there is one such experimental filesystem called SwitchFS. I'm not sure how active this project is but I might give it a whirl. I've been itching for some open source stuff to work on.
This weekend I decided to take on the adventure of installing Ubuntu 6.06 (Dapper Drake) on my Powerbook G4. I wasn't sure what to expect as to hardware support, and learned a lot along the way. I am writing this from Firefox in Linux on my Powerbook, though, so it's been somewhat successful.
I've been wanting to have a good Linux box to hack on. I did install Xubuntu on an old Thinkpad, but it's falling apart and the battery lasts a whopping 10 minutes. I've heard some good things about Ubuntu on PowerPC, so decided to give it a go.
I have OS X Panther on my Powerbook. I have never had the chance to upgrade to Tiger. I have a 70G drive and had plenty of free space, so my first task was to resize and repartition. Now I'm pretty familiar with doing this on the PC, but not on Macs. I have been using SuperDuper for backups to an external USB drive, so I was set for backups. I actually wouldn't mind it if my Panther partition got trashed, as it would give me an excuse to upgrade to Tiger. In the end nothing bad happened, so I was still luckily (or unluckily) stuck with Panther.
Back to resizing. On a whim, I tried booting the Ubuntu CD to see if it had an easy resizing tool like it does for Intel machines. Nope, it just wanted to erase my whole disk. I read that it is possible to do resizing with free tools, but it was late and I didn't feel that brave. I decided to purchase and download iPartition, which seemed similar to Partition Magic. Of course, it couldn't resize the live running boot partition, so I needed to boot something else. The manual first recommends trying to boot off of a backup drive, and if that doesn't work, to use the boot CD creator tool that comes with iPartition.
I thought this would be a good test of my 'backup strategy.' I plugged in my external USB drive. After some digging around, I found out that you can have the Mac scan for bootable media at powerup by holding down the Option key while powering on. This brings up a nice GUI boot screen, and it showed my USB drive! I selected it, and after some crunching it eventually came up with a no-smoking sign and didn't boot. Oh well, I guess my backup isn't that cool. It does have files on it though, so I am backing up something, just not something that will boot. I tried some other hackery mentioned on the web: booting into Open Firmware by holding down the twister-inspired key combination of Command-Option-O-F while powering up, and changing the boot-device with setenv. I swear, Apple is trying to give me arthritis. Anyway, no luck with that, and I later realized that setenv actually writes the NVRAM, so I had to remember the old setting and undo this. Fuck booting from my 'backup.'
Next I created an iPartition boot CD with their CD creator tool. I booted it (again with the Option key held at boot to select CD), and successfully shrunk my OS X partition by 10G and left the free space. This operation was done so fast I thought it didn't do anything. I booted back into OS X. It booted ok (whew), and I verified in Disk Utility that the partition shrunk by 10G. So far so good.
Now on to installing Ubuntu. I booted the CD. I wanted to see if the Airport wireless worked, so went to System -> Administration -> Networking. It detected the Airport but failed to configure it, and I remember seeing boot messages about firmware errors. I did some research beforehand and it turns out the driver is not open, and it was up in the air whether it would work or not. I figured I could fool with it after installation. I grabbed a wireless card from my Thinkpad, and voila that worked like a charm in my Mac. At least I would have internet access while I debugged the Airport driver later.
I ran the installer and specified for it to use my free space. The text shown before making the partition gave the impression it was going to wipe my HD, i.e. it was not obvious it would use my free space. I thought, whatever, if it trashes it that's ok. It didn't (whew), and after 40 minutes or so, I rebooted. I was given a LILO-like boot menu with my OS X still there. I tested OS X, and that still worked (whew). Next I rebooted into Linux! It was speedy and worked like a charm. My wireless was working, but with the PC card, not with the internal Airport.
I started searching the discussion boards for Airport support. Now, Ubuntu forums are interesting. They are mostly filled with non-Linux users seeking help. That's cool and all, but it just makes my searching more difficult with crappy results. I finally found this guide, which was generally what I did except for the part about installing 'Network Manager', whatever that was. I figured I should be able to use the Network settings app that is already part of Ubuntu. I found some other good docs, the latter being very helpful. Eventually I got it working whenever I brought up the interface in the Networking app, but it would take a very long time, and it would never work at bootup. From those docs, I added the following to /etc/network/interfaces:
auto eth1
iface eth1 inet dhcp
    pre-up ifconfig eth1 up
    pre-up iwconfig eth1 rate 11M
    pre-up iwconfig eth1 ap any
    wireless-essid myssid
I rebooted and voila, Airport works! No more PC card needed. Everything, even sound, was working. But after using Ubuntu for some time on the Mac, I realized a big annoyance...
I realized early that Ubuntu is not very usable with a 1-button mousepad. After some searching I found out that F12 (or maybe it was F11, it's not working now) was mapped to right-click. WTF, there is no way in hell I'm using any F key for right-clicking. In OS X I can do it by ctrl-clicking, so I should be able to do it in Linux too. I came across this posting about installing mouseemu. I did so, and added to /etc/default/mouseemu:
MID_CLICK="-middle 125 272"   # Command key + mouse click
RIGHT_CLICK="-right 29 272"   # Control key + mouse click
Once I did that and restarted mouseemu (sudo /etc/init.d/mouseemu stop/start), I was able to right-click with ctrl-click. Yay! But then after realizing that I have to use Alt-Tab instead of Command-Tab to switch windows, I was annoyed further because the Alt key on the Mac is proven to increase the risk of arthritis and why the hell do I have to remember a different keystroke for Linux when Command-Tab works in OS X?
I found somewhere on the web that xmodmap can be used for this. I created a ~/.xmodmap file that contained:
keycode 115 = Alt_L
One thing that's very cool: Ubuntu will notice your .xmodmap and ask you to load it on next startup. There was some discussion on whether to use .xmodmaprc, or .Xmodmaprc, or .Xmodmap, or .muhahhayouWillneverFigureitOutrc. But I will tell you: .xmodmap works.
After booting back and forth between OS X and Linux, I realized that the boot manager was defaulting to Linux. I didn't want that; I wanted OS X by default. Thus began my journey into yaboot, the LILO for Macs.
It was a short journey. I edited /etc/yaboot.conf and added after the macosx= line:

defaultos=macosx
This is described in 'man yaboot.conf'. I rebooted, but whaddaya know, it still booted into Linux by default. I was optimistically thinking this would be like GRUB, where I didn't have to run anything to update the boot sector. Turns out I need to run ybin, which copied my changes in. I didn't know what arguments to give, so being inspired by 'lilo', I just ran 'ybin.' That worked, and I was booting OS X by default. Yay.
Now I worked a bit more in Ubuntu and realized that I needed to copy some files off of my OS X partition (namely, my SSH config file).
I had no idea if OS X's HFS+ partition was supported in Linux. I found this document which states that it is, and the fs type is hfsplus. However, 'man mount' only shows hfs, not hfsplus. I stuck with hfsplus anyway. I found this post about a user mounting his partitions. All I wanted to do was copy a file, and I'd be pretty pissed if Linux screwed up my OS X partition accidentally. I decided to mount it read-only. But what's my OS X partition? I easily found that from the macosx= line in /etc/yaboot.conf, which was /fev/hda3. dev has been replaced with fev to protect the innocent. For some reason either my webhosting provider or blogging software won't let me post dev. I ran the following commands:
mount -o ro -t hfsplus /fev/hda3 /tmp/mnt
cp my file
At this point there are only a few things left that annoy me. The mouse sensitivity seems quite different from OS X; I tried adjusting the acceleration settings, but it still just seems weird. My fan seems to be constantly running, and my guess is Linux is not as easy on the CPU as OS X. Finally, cutting and pasting with a 1-button mouse in Linux is a bitch. Why do I have to shift-ctrl-anything to copy text in Terminal anyway? Why can't I just select and have it auto-copy? Maybe there is a way, but I haven't figured it out yet. Maybe it is time for me to get a real mouse and keyboard.
The fn, ctrl, alt, and command keys on the Mac cause me no end of grief. I constantly forget which incantation to use to switch workspaces, switching firefox tabs, closing windows, closing firefox tabs, etc. But that's more of a rant towards the Powerbook keyboard in general. Also forget about Flash and other plugins, they are nonexistent for PowerPC Linux.
In general, it was a fun adventure to figure out how to do things and cool to have Linux on my Powerbook. It runs beautifully and I'm sure I'll be hacking on it for some time.
I have this pretty old IBM Thinkpad iSeries Type 1161-260 with a Celery processor. It's been sitting in my closet for about 6 months. I had attempted to run Linux on it at one point, but the Cardbus PCMCIA interface on this laptop did not have good support, so essentially no PC card worked. I spent a lot of time hacking at it and eventually gave up and left Windows XP on the system.
This weekend I thought maybe Linux these days has better support. It's a low end laptop with 192MB RAM. I wanted to run Ubuntu, but needed a lightweight version. That's when I found Xubuntu, a lean Ubuntu. I had about 3G free on my windows partition (a whopping 5G drive). I guessed correctly that Ubuntu must have a partition resizer that works well.
I downloaded the Xubuntu ISO on my Mac, burned it, and booted my Thinkpad. After some crunching, it came up to X with a live-CD-like distro and an 'Install' icon on the desktop. Cool. I brought up a terminal and did an 'ifconfig -a' expecting to see only my loopback device. Lo and behold, I saw an eth0. I thought to myself, this can't be my wireless card. I then went to the graphical network configurator and saw that it had detected my wireless card. A few clicks and I saw the open wireless networks (I keep mine open too). I was able to join, start up Firefox, and browse. My jaw dropped. Ubuntu out of the box on its installation live-CD has support for my wireless card that I could never get working for the life of me? I was very impressed. I had to start the installation.
After a long time I eventually had Xubuntu installed. I rebooted and found out that now my wireless card wasn't detected. Poking in /var/log/messages I saw:
cs: warning: no high memory space available!
Now this shit looks familiar: the same crap I had ages ago that I couldn't fix. So the hacking continued. I thought maybe some module loaded in the full install but not in the live-CD was conflicting, maybe USB. So I sought to disable USB. Now, times have changed and this is not as easy as just commenting it out in the /etc/mod* files. I found /etc/modprobe.d/aliases and began fiddling with it. But for the life of me I could not get USB disabled. Even when I removed the kernel module file it would still get loaded!
This really baffled me. I thought maybe USB is compiled into the kernel. But that wasn't the case because I see it in a 'lsmod'. After much searching I realized that now distros use an initramfs, which is like an initrd but I believe you can store more. I looked at the current initramfs used during boot (via /boot/grub/menu.lst), which is a fun gzipped cpio archive:
gzip -dc /boot/initrd.img-2.6.15-26-386 | cpio -i
And whaddaya know, the USB kernel modules are stored in there. I found out about mkinitramfs (like mkinitrd), made a new one without USB, and rebooted. But still my fucking wireless card didn't work. I went to sleep and decided to try again later. I was determined, though, because the damn thing worked on the live-CD; I had to be able to get it working on the installed system! It would be really lame if I gave up now.
I spent some time searching, and found some references about pcmciautils replacing pcmcia-cs, and /etc/pcmcia being moved to /usr/share/pcmciautils. On my system /etc/pcmcia was empty. I booted the live-CD again and noticed that /etc/pcmcia had lots of config files, importantly a config.opts. On my installed system, this file was in /usr/share/pcmciautils. It contains info on memory regions to probe.
Then I found this German posting that seemed to be recommending copying config.opts to /etc/pcmcia. My 2 semesters of German came through, and I knew that 'alles wunderbar' meant something good. I copied the file, rebooted. Voila, my wireless card worked!
What a pain and such a simple fix! It's really lame this file is not automatically copied there, but I'm glad to have gotten it working. Where there is a will there is a way.
A while ago I had set up an Ubuntu Linux system for my mom to use. There were some issues with sound not working that I never really looked into. This morning I decided to look more into it.
Basically, the problem was that sound was not working. Looking at 'lsmod' showed a snd-cmipci module loaded, which I found out was the driver for my card (Crystal Media 8738). So I started checking log messages for any device-not-found messages but didn't find any. Sound applications, like Real Player, would just hang when they started. Doing an strace on them revealed them trying to open /dev/audio, /dev/dsp, or /dev/snd/pcmSomeHexCrap and just blocking on that. Then I tried something simple:
echo a > /dev/audio
This should return immediately with some garbage sound sent to the device. But it didn't, and just hung. I thought it might be an IRQ or some sort of hardware conflict. I found on Google mention of upgrading to a new ALSA driver (ALSA is what Ubuntu uses). So I downloaded the ALSA source, which then wanted a configured Linux kernel source. Ubuntu does not include this, and I didn't want to be fucking recompiling the kernel for sound.
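For what it's worth, that hang is a blocking open(2): the broken driver simply never lets the write proceed. The symptom can be reproduced safely with a FIFO, which also blocks a writer until a reader appears, with timeout(1) keeping the experiment from wedging the shell (a sketch; it doesn't touch /dev/audio):

```shell
# A FIFO with no reader blocks anyone opening it for write -- the same
# symptom the dead sound device showed. timeout(1) aborts the attempt.
rm -f /tmp/fake-audio
mkfifo /tmp/fake-audio
if timeout 1 sh -c 'echo a > /tmp/fake-audio' 2>/dev/null; then
    result="write returned"
else
    result="blocked until timeout"   # what the broken device did, forever
fi
echo "$result"
rm -f /tmp/fake-audio
```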
So I said forget upgrading, I'll see if it's another problem. Just by chance, I SSH'ed remotely into the box and didn't login on the console. I tried the same echo command above and it worked. Then I tried aplay to play a sound and it worked. At this point sound was working when I didn't login to X. So some sound daemon X is starting is screwing things up. That narrowed it down.
I logged in on X and started looking at the lsof output of daemons that sound like they have something to do with sound. I found esd had /dev/dsp open, and a 'killall esd' later I was able to play sounds just fine. I ended up disabling the 'sound server' via System -> Preferences -> Sound as explained here, which said goodbye to esd. Real Player was happy now.
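Under the hood, lsof finds this by walking the /proc/&lt;pid&gt;/fd symlinks, and the hunt can be recreated with nothing but the shell. This sketch uses a stand-in file rather than /dev/dsp:

```shell
# Start a background process holding a file open, then identify it by
# reading its /proc fd symlinks -- the same data lsof reports.
touch /tmp/fake-dsp
sleep 5 < /tmp/fake-dsp &
holder=$!
sleep 1   # give the child a moment to set up its fds

found=""
for fd in /proc/$holder/fd/*; do
    if [ "$(readlink "$fd")" = "/tmp/fake-dsp" ]; then
        found=$holder
    fi
done
echo "holder of /tmp/fake-dsp: pid $found"
kill "$holder" 2>/dev/null
rm -f /tmp/fake-dsp
```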
A Google search comes up with mention of using '-as 2' to esd which might help. I honestly don't give a shit. This just reinforces to me the pathetic state of sound on Linux.
So a Windows PC my mom was using got hosed and it seemed nontrivial to fix it. I said what the hell and installed Ubuntu Linux. I just showed her how to login and start a browser (that's all she normally does). I think something like this is the best way to determine usability of a distribution.
The issues that have come up so far are Java, Flash, and RealAudio not working. Of course, no distro I've ever used actually includes this stuff. Is it really just licensing issues? Luckily Ubuntu has a wiki page describing how to install them. It was relatively simple, but could've been simpler.
The other problem is sound simply doesn't work. I found that out when Real Player wouldn't start and the process would just hang. I did an strace and saw it was trying to write to /dev/dsp and hanging. I didn't have a chance to look into this much, so whatever, no sound for now.
I think it's kinda lame that for someone to use Linux they still need Linux expertise to install plugins, setup apt repositories, and other crap that only a long-time Linux user can grasp. And forget about setting up a printer. It's almost like all distros are kept elite for the job security of Linux sysadmins. I would really like to see a day when a computer illiterate can use Linux without relying on a friend that knows Linux.
In general the system seems to be working and I haven't got too many complaints. We'll see how long that lasts, and when the Windows withdrawal symptoms set in.