In case you are using XMonad with Unity, you might get annoyed by the vertical screen real estate wasted by the Unity panel, especially since it is no longer possible to configure it to auto-hide.
However, XMonad's XMonad.Hooks.ManageDocks makes it possible to work around this issue. In my case, the Unity panel is at the top of the screen and xmobar at the bottom, and I would like to hide the Unity panel by default.
We first tell ManageDocks to always show only the dock at the bottom (D). The following line extends the defined layouts.
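A minimal sketch of such a layout extension (the myLayoutHook name and the defaultConfig base are my assumptions):

import XMonad
import XMonad.Hooks.ManageDocks
import XMonad.Util.Types (Direction2D(D))

-- avoid struts only at the bottom (D): space stays reserved for xmobar,
-- while the Unity panel at the top is covered by windows
myLayoutHook = avoidStrutsOn [D] $ layoutHook defaultConfig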
Furthermore, I would like to toggle the visibility of the Unity panel with a key binding: M-b toggles the visibility of all docks, i.e., xmobar and the Unity panel in my case. The following lines extend my keys configuration in XMonad.
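A sketch of such a binding, merged with the default keys (the myKeys wrapper is my assumption):

import qualified Data.Map as M
import XMonad
import XMonad.Hooks.ManageDocks

-- M-b sends ToggleStruts, which toggles the reserved space of all docks
myKeys conf = M.fromList [ ((modMask conf, xK_b), sendMessage ToggleStruts) ]
              `M.union` keys defaultConfig conf

Setting keys = myKeys in the XMonad configuration then enables the binding.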
Now I can enjoy a mostly hidden Unity panel that can be shown on demand.
Amazon recently announced that it is possible to host a static website entirely on S3. I am using Jekyll in combination with git to deploy my website, and I was curious how to combine this setup with S3 static website hosting. In this blog post I describe the necessary steps to achieve this.
I tried this setup on Ubuntu 10.10 64bit, but I suppose it works equally well on other distributions. I also assume you already have Jekyll (with git as the deployment method) and s3fs installed. Furthermore, the boto Python library for accessing AWS is required for the initial setup.
You can follow this guide to set up an S3 bucket for serving a static website. Let's say the bucket is named 'mybucket' for the rest of this setup guide.
Amazon's Identity and Access Management (IAM) is useful for restricting the deployment script's access to the specific bucket holding the website. I used the following Python script to set up the user and its policy:
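A sketch of such a script, assuming boto's IAM support (the policy name is illustrative):

import json
import boto

policy = json.dumps({"Statement": [
    {"Effect": "Allow",
     "Action": "s3:*",
     "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]},
    {"Effect": "Allow",
     "Action": "s3:ListAllMyBuckets",
     "Resource": "arn:aws:s3:::*"}]})

iam = boto.connect_iam()  # uses the AWS credentials from the environment
iam.create_user('jekyll')
iam.put_user_policy('jekyll', 'jekyll-website-deployment', policy)
# the response contains the AccessKeyId and SecretAccessKey of the new user
print(iam.create_access_key('jekyll'))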
Basically, the user jekyll only has access to the bucket mybucket and its objects. However, in order to work with s3fs, the user also needs to be allowed to issue the ListAllMyBuckets call. The script prints out the access and secret keys used for the deployment.
The credentials for s3fs (in the form accessKeyId:secretAccessKey) are placed into ~/.passwd-s3fs. We then try to mount the bucket using:
s3fs mybucket /home/user/s3w3 -o use_rrs=1 -o default_acl=public-read
The website will be stored using reduced redundancy storage (RRS) and with a default ACL that allows everybody to read the objects.
We have to unmount the S3 bucket before proceeding to the actual deployment:
fusermount -u /home/user/s3w3
Now everything is in place and we can use the following git post-receive hook script to do the actual deployment to the S3 website bucket.
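A sketch of such a hook, assuming the old-style jekyll CLI that takes source and destination directories (paths are illustrative):

#!/bin/sh
# build the pushed site and copy it into the mounted S3 bucket
REPO=$HOME/website.git
WORK=$(mktemp -d)
MOUNT=/home/user/s3w3

git --git-dir=$REPO --work-tree=$WORK checkout -f master
jekyll $WORK $WORK/_site
s3fs mybucket $MOUNT -o use_rrs=1 -o default_acl=public-read
cp -r $WORK/_site/* $MOUNT/
fusermount -u $MOUNT
rm -rf $WORK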
We can test the script by pushing to the git repository and checking the website at the S3 website endpoint. Have fun!
Since playing with Xen’s stub domains is fun, I dug out an old project that also uses these stub domains: a fuzzer for Xen hypercalls.
Hypercalls are similar to system calls in conventional operating systems: they allow a VM to request privileged operations from the hypervisor. Fuzzing system calls is quite a popular sport, so it is interesting to see how well this approach carries over to hypervisors.
I published a proof-of-concept fuzzer, which is not very sophisticated, but it shows how one can build a stub domain that fuzzes the different hypercalls. The hypercalls are divided into five groups, based on the number of arguments they expect, and for each argument we randomly select a value from one of several categories, as sketched below.
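A minimal sketch of that selection (do_hypercall stands in for the stub domain's hypercall entry, and the concrete categories shown here are illustrative):

#include <stdint.h>
#include <stdlib.h>

/* dispatches a hypercall with up to five arguments via Mini-OS */
extern long do_hypercall(int nr, uint64_t a1, uint64_t a2, uint64_t a3,
                         uint64_t a4, uint64_t a5);

/* pick an argument from one of several categories:
   zero, a small integer, a random word, or a pointer to writable memory */
static uint64_t fuzz_arg(void)
{
    switch (rand() % 4) {
    case 0:  return 0;
    case 1:  return (uint64_t)(rand() % 16);
    case 2:  return ((uint64_t)rand() << 32) ^ (uint64_t)rand();
    default: return (uint64_t)(uintptr_t)malloc(64);
    }
}

/* fuzz one hypercall from the group expecting nargs arguments */
static void fuzz_hypercall(int nr, int nargs)
{
    uint64_t a[5] = { 0, 0, 0, 0, 0 };
    for (int i = 0; i < nargs; i++)
        a[i] = fuzz_arg();
    do_hypercall(nr, a[0], a[1], a[2], a[3], a[4]);
}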
Some hypercalls are currently disabled because they interrupt the fuzzing process; further investigation is needed there.
Have fun extending the fuzzer and potentially finding some interesting bugs.
In this post we will investigate how Redis, a popular key-value store, can be run natively on Xen, i.e., without the support of a conventional operating system such as Linux, and what implications this has for performance.
In recent months there has been a lot of buzz about the increasing complexity and the number of abstraction layers in virtualized environments. A post at HighScalability.com touches on this issue and highlights the performance tax of these layers. It is necessary to reconsider current architectures and to simplify the complex layering.
Two recently announced projects focus on providing language runtime environments that run directly on Xen: Mirage for OCaml and HaLVM for Haskell. This removes the conventional operating system that typically hosts these runtime environments and can potentially improve efficiency.
However, these approaches require the software to be written in OCaml or Haskell, while the majority of software written in C still requires a conventional operating system layer. A solution for this problem was presented in a paper about HPC using lightweight Xen VMs, where software written in C is built using a special toolchain for Xen that produces a Xen VM image rather than an executable binary for a conventional operating system. Unfortunately, no benchmarks or concrete implementations are presented.
Our hypothesis is that services running in virtual machines will experience a significant performance improvement when the conventional operating system layer is removed and replaced with a vastly simplified one.
Redis is a popular in-memory key-value store written in C. There are a few articles describing the architecture and background of Redis, so we just point to them: Redis, from the Ground Up; Redis: under the hood.
We picked Redis as an example for several reasons; overall, its simplicity was the major advantage for a proof of concept.
Xen Mini-OS started as a small example kernel to demonstrate to developers how to port their kernels to Xen (for paravirtualization). More features were added over time (cf. Xen 3.3 Feature: Stub Domains), such as a C library, a TCP/IP stack, and a POSIX environment, and other application scenarios for Mini-OS were discovered, e.g., PVGrub is based on Mini-OS. Nowadays, stub domains are small Xen domains based on Mini-OS and the subsequently added features, tightly integrated into the Xen build system (cf. xen-unstable.hg/stubdom). “Hello World” stub domains written in C and OCaml can be used as a basis to develop your own stub domains.
The implementation of running Redis as a Xen stub domain basically follows four steps: 1) obtaining the Redis sources, 2) integrating them into the stub domain build system, 3) making them compile against Mini-OS, and 4) fixing the remaining runtime issues. Steps 1) and 2) are straightforward and just required some Makefile modifications for 2).
In order to make it compile, that is step 3), we had to make some adaptations (read: hacks) to the Mini-OS environment and to Redis. For example, minor changes had to be made to the calls for randomness generation and process synchronization. We also wrapped the Redis main function in a piece of code that fixes some problems with standard file descriptors and error handling; a sketch of it follows.
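A minimal sketch of such a wrapper (the renamed redis_main and the /dev/null trick are assumptions on my part):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Redis' original main(), renamed so it can be wrapped */
extern int redis_main(int argc, char **argv);

int main(int argc, char **argv)
{
    /* make sure fds 0, 1, and 2 are allocated before Redis opens any
       files, so that its log output does not end up on a data fd */
    int fd;
    while ((fd = open("/dev/null", O_RDWR)) >= 0 && fd <= 2)
        ;
    if (fd > 2)
        close(fd);

    int ret = redis_main(argc, argv);

    /* returning from main would end the domain silently: report it first */
    fprintf(stderr, "redis_main returned %d\n", ret);
    return ret;
}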
Furthermore, for step 4), the use of fork() crashes the stub domain due to missing support in Mini-OS, so we had to disable the database dump to disk and the virtual memory support (cf. Redis Virtual Memory).
The result is the following Xen VM image:
(It's a proof of concept, so do not use it in any production environment, and use it at your own risk.)
Download the previously mentioned Xen image to your dom0 host system and store the following Xen VM configuration in redis_minios.conf:
kernel = "/path/to/redis/mini-os.gz"
name = "redis_minios"
memory = 512
vif = ['ip="10.0.0.1"']
on_crash = "destroy"
Set an IP alias for the dom0 ethernet interface:
ifconfig eth0:0 10.0.0.2
Start the Redis Xen VM using:
xm create redis_minios.conf
And now you are able to connect to the redis instance running on 10.0.0.1.
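For example, with redis-cli in dom0:

redis-cli -h 10.0.0.1 ping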
Since our hypothesis is that removing the conventional operating system layer yields a significant performance improvement, we had to benchmark the resulting Mini-OS-based Redis version and compare it to a traditional Linux-based one.
Our test machine is a Debian Lenny x86-64 box running Xen version 3.2-1 with a 2.2GHz Athlon 64 3700+ CPU and 2GB of memory. We have two VMs running Redis (the Mini-OS-based and a Linux-based one) and run redis-benchmark from dom0.
Mini-OS-based Redis:
PING: 14087.32 requests per second
PING (multi bulk): 13352.47 requests per second
SET: 11682.24 requests per second
GET: 13949.79 requests per second
INCR: 13125.98 requests per second
LPUSH: 13589.67 requests per second
LPOP: 13333.33 requests per second
SADD: 13175.23 requests per second
SPOP: 12970.17 requests per second
LPUSH (again, in order to bench LRANGE): 13123.36 requests per second
LRANGE (first 100 elements): 9330.22 requests per second
LRANGE (first 300 elements): 4894.76 requests per second
LRANGE (first 450 elements): 3387.53 requests per second
LRANGE (first 600 elements): 2905.29 requests per second
Linux-based Redis:
PING: 11264.04 requests per second
PING (multi bulk): 11210.76 requests per second
SET: 10857.92 requests per second
GET: 11098.78 requests per second
INCR: 10854.66 requests per second
LPUSH: 10896.74 requests per second
LPOP: 11100.55 requests per second
SADD: 11080.84 requests per second
SPOP: 11251.12 requests per second
LPUSH (again, in order to bench LRANGE): 11061.95 requests per second
LRANGE (first 100 elements): 8849.56 requests per second
LRANGE (first 300 elements): 4944.28 requests per second
LRANGE (first 450 elements): 4075.79 requests per second
LRANGE (first 600 elements): 3558.16 requests per second
The most interesting operations for common applications are SET and GET. We get about an 11-13% performance improvement for these two operations when using the Mini-OS-based Redis version in comparison to the Linux-based one. It is not a dramatic improvement, but it demonstrates the performance tax of the conventional operating system layer.
In the current proof of concept we have the following limitations: the database dump to disk and the Redis virtual memory support are disabled (due to the missing fork() support in Mini-OS), and the network configuration is static, i.e., there is no DHCP support.
In the next steps we have to do more thorough benchmarking and profiling of the Mini-OS-based Redis in order to determine bottlenecks in the current implementation. Furthermore, we need to get DHCP working so that the image can run in dynamic network environments. For example, we could create an Amazon Web Services EC2 image using Amazon's use-your-own-kernel technology, which is, by the way, based on Mini-OS.
March 15-16, 2011 in Zurich, Switzerland
The aim of this workshop is to bring together researchers and practitioners working in cryptography and security, from academia and industry, who are interested in the security of current and future cloud computing technology. The workshop considers the viewpoint of cloud-service providers as well as the concerns of cloud users. The goal is to create a dialogue about common goals and to discuss solutions for security problems in cloud computing, with emphasis on cryptographic methods.
More information here.
Stipends for students are available.
I have uploaded my Master's thesis, titled Automated Security Analysis of Infrastructure Clouds.
We also derived a paper from it which got accepted at the ACM Cloud Computing Security Workshop:
Cloud computing has gained remarkable popularity in recent years among a wide spectrum of consumers, ranging from small start-ups to governments. However, its benefits in terms of flexibility, scalability, and low upfront investment are overshadowed by security challenges that inhibit its adoption. Managed through a web-services interface, users can configure highly flexible but complex cloud computing environments. Misconfiguration of such cloud services poses a severe security risk that can lead to security incidents, e.g., the erroneous exposure of services due to faulty network security configurations.
In this article we present a novel approach to the security assessment of the end-user configuration of multi-tier architectures deployed on infrastructure clouds such as Amazon EC2. In order to perform this assessment for the currently deployed configuration, we automated the process of extracting the configuration using the Amazon API. In the assessment we focus on the reachability and vulnerability of services in the virtual infrastructure, and present a way to visualize and automatically analyze them based on reachability and attack graphs. We propose a query and policy language for the analysis, which can be used to obtain insights into the configuration and to specify desired and undesired configurations. We have implemented the security assessment in a prototype and evaluated it for practical scenarios. Our approach effectively enables the remediation of today's security concerns through validation of configurations of complex cloud infrastructures.
I wrote up a short wishlist for AWS a while ago, which also included Fine-Grained Access Control for the API. I am very excited that Amazon announced AWS Identity and Access Management, which tackles this problem. It is a preview beta and not yet fully integrated into the management console, but still a good move by Amazon from a security perspective.
For the curious: a dump of the XenStore config for a VM running on the RackSpace Cloud. It is actually quite a straightforward and simple setup: image files are used for both the root and swap disks, and a bridged network setup for the two interfaces (one internal, one external). Although the network is bridged, the VM is not able to obtain traffic destined for other VMs, as VLANs are used for separation.
Shortly after publishing my notes on the EC2 architecture, I looked into the networking setup of EC2, in particular trying to figure out their addressing schemes. Since I am no longer interested in this information, I will publish my incomplete notes and the raw data gathered from about 80 instances in this post. My notes are based on information obtained from small instances in the us-east-1d zone.
I assumed the first hop in the traceroute from a VM is the actual dom0 IP address.
Consider the private IP addresses in the form 10.X.Y.Z. I have noticed that Y is partitioned into blocks containing a /24 for dom0 IP addresses, a /24 for VMs, and a /23 for another set of VMs. For example:
10.208.176/24 is the dom0 range;
10.208.177/24 the first VM range;
10.208.178/23 the second VM range.
Based on my data, the dom0 IP addresses always end in .3, but there seems to be no pattern between a VM's IP address and the ending of the corresponding dom0.
I do not have much information on this one. MAC addresses are typically of the form 12:31:39:X:Y:Z, where X can be derived from the second octet of the private IP address. The following list gives the value of X for each second IP address octet; a small derivation sketch follows the list. As an example, the IP 10.210.X.Y leads to the MAC 12:31:39:09:X':Y' (X' and Y' being the hexadecimal encodings of X and Y), because 09 is listed for octet 210.
208: 06
209: 07
210: 09
211: 0A
214: 0B
215: 0C
240: 04
241: 05
248: 02
249: 03
254: 00
255: 01
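A small sketch of this derivation (assuming, as in the example above, that the last two MAC bytes are the hexadecimal encodings of the last two IP octets):

# derive the EC2 MAC address of a VM from its private 10.X.Y.Z address
OCTET_TO_X = {254: '00', 255: '01', 248: '02', 249: '03',
              240: '04', 241: '05', 208: '06', 209: '07',
              210: '09', 211: '0A', 214: '0B', 215: '0C'}

def mac_for_ip(ip):
    _, second, third, fourth = [int(part) for part in ip.split('.')]
    return '12:31:39:%s:%02X:%02X' % (OCTET_TO_X[second], third, fourth)

print(mac_for_ip('10.210.1.2'))  # -> 12:31:39:09:01:02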
The raw data can be found here. It contains network configuration information (ifconfig, traceroute, and routes) of about 80 instances from the us-east-1d zone. Let me know if you make any interesting discoveries based on that data.