Linux-Kongress 2006
13th International Linux System Technology Conference
September 5-8, 2006
Georg-Simon-Ohm-Fachhochschule, Nürnberg, Germany

Abstracts

SELinux
by Ralf Spenneberg
Tuesday, 2006-09-05 10:00-18:00 and
Wednesday, 2006-09-06 10:00-18:00

By design, Linux protects the system only with simple discretionary access controls, which are not enough against many attacks. Mandatory Access Control (MAC) is the current solution to this problem. Up to now only two major distributions support MAC: Red Hat/Fedora uses SELinux and SUSE uses AppArmor. But the next version of Debian will also use SELinux to enhance its security. This tutorial shows you how to use and tune a given SELinux policy. Moreover, you will be able to modify the policy for your needs and develop new policies for yet unsupported applications.

About the speaker:

The author has used Linux since 1992 and has worked as a system administrator since 1994. During this time he has worked on numerous Windows, Linux and UNIX systems. For the last 5 years he has been working as a freelancer in the Linux/UNIX field, most of the time providing Linux/UNIX training. His specialty is network administration and security (firewalling, VPNs, intrusion detection). He has developed several training classes used by Red Hat and other IT training companies in Germany. He has spoken at several SANS conferences and even more UNIX/Linux-specific conferences. He has written several German books on Linux security.

Network Monitoring with Open-Source Tools
by Thomas Fritzinger, Jens Link and Christoph Wegener
Tuesday, 2006-09-05 10:00-18:00 and
Wednesday, 2006-09-06 10:00-18:00

As our daily lives depend more and more on a working IT landscape, while the complexity of the required infrastructure grows rapidly at the same time, network management and network monitoring are becoming ever more important. A number of complex and often very expensive commercial tools exist for network monitoring. This workshop shows how comparable functionality can be achieved with specialized, free and open-source programs.

Topics in detail / outline of the tutorial:

  • Organizational questions
    • Options for network monitoring
    • Business planning / business continuity / TCO
    • Why free and open-source software?
    • The role of network monitoring in risk management under Basel II and the Sarbanes-Oxley Act (SOX)
  • Legal aspects
  • Simple Network Management Protocol (SNMP)
  • Qualitative monitoring
    • Multi Router Traffic Grapher (MRTG)
  • Availability monitoring
    • Nagios (see the sketch after this list)
  • Proactive monitoring, analyzing log files
  • Troubleshooting networks with Ethereal
  • Security monitoring
    • nmap
    • Nessus and new open-source alternatives
    • Snort
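
To give an idea of the kind of availability check covered in the Nagios item above, a minimal check can be written in a few lines of Python. The host, port and warning threshold below are made-up examples; only the Nagios plugin convention of exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL) is fixed.

  import socket, sys, time

  HOST, PORT, TIMEOUT = "www.example.org", 80, 5.0   # placeholders

  start = time.time()
  try:
      conn = socket.create_connection((HOST, PORT), timeout=TIMEOUT)
      conn.close()
  except OSError:
      print("CRITICAL - %s:%d not reachable" % (HOST, PORT))
      sys.exit(2)          # Nagios: 2 = CRITICAL

  elapsed = time.time() - start
  if elapsed > 1.0:        # arbitrary warning threshold
      print("WARNING - %s:%d answered in %.2f s" % (HOST, PORT, elapsed))
      sys.exit(1)          # Nagios: 1 = WARNING

  print("OK - %s:%d answered in %.2f s" % (HOST, PORT, elapsed))
  sys.exit(0)              # Nagios: 0 = OK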

The material is presented in lecture style and deepened through practical exercises that the participants carry out on their own machines. In addition, the lecture slides are available as handouts with room for notes.

Target audience / prerequisites:

The two-day tutorial is aimed at experienced system administrators whose job is to run, monitor and optimize complex network environments. Participants should already have experience installing programs under Linux and bring basic knowledge of the TCP/IP stack.

Participants must bring a computer with a current Linux distribution. Note: users of other operating systems (*BSD or MacOS) should contact the speakers before the event.

In the course of the workshop, the setup of a Linux-based monitoring server with exemplary services will be demonstrated and discussed. Not only the purely technical aspects of network monitoring will be covered, but also the fundamentals of the necessary organizational and legal framework. After the event, participants will be able to put what they have learned into practice on their own.

About the speakers:

Thomas Fritzinger is a trained IT specialist for systems integration (Fachinformatiker für Systemintegration). He has also been working for iAS since 2002, where he heads the Networking Development department.

Jens Link has been working as a network and system administrator for years. During this time he has had to deal with all kinds of network problems (on all ten layers of the OSI model).

Christoph Wegener (http://www.wecon.net/) holds a PhD in physics and heads the Business Development division at gits AG; in addition, he has been a freelance consultant in the areas of Linux and IT security for many years. He is a co-founder of the "Arbeitsgruppe Identitätsschutz im Internet (a-i3) e.V.".

Asterisk for the beginner
by Stefan Wintermeyer
Tuesday, 2006-09-05 10:00-18:00

For those who haven't installed and configured an Asterisk server yet.

Agenda:

  • Install Asterisk on a Knoppix or Ubuntu system
  • Configure a basic system with 2 SIP phones
  • Handling of incoming and outgoing calls over a SIP gateway (e.g. sigate.de)
  • Voicemail system
  • Basics about variables and how to use them
  • Codecs and protocols
About the speaker:

Stefan Wintermeyer is the author of the Asterisk book published by Addison-Wesley. His company amooma offers dedicated Asterisk trainings.

Recovering from Hard Drive Disasters
by Theodore Ts'o
Tuesday, 2006-09-05 10:00-18:00

Ever had a hard drive fail? Ever kicked yourself because you didn't keep backups of critical files, or discovered that your regular nightly backup didn't run? (Of course not, you keep regular backups and verify them frequently to make sure they are successful.) For those of you who don't, this tutorial will discuss ways of recovering from hardware or software disasters. Topics covered will include a basic introduction to how hard drives work, filesystems, logical volume managers, and software RAID on Linux. Specific low-level techniques to prevent data loss will include recovering from a corrupted partition table, using e2image to back up critical ext2/3 filesystem metadata, using e2fsck and debugfs to sift through a corrupted filesystem, and finally some measures to avoid needing heroic measures to recover your data in the first place.

About the speaker:

Theodore Ts'o has been a C/Unix developer since 1987, and has been a Linux kernel developer since September 1991. He led the development of Kerberos V5 at MIT for seven years, and is the primary author and maintainer of the ext2/ext3 filesystem utilities. Theodore currently serves on the board of the Free Standards Group and contributes to the development of the Linux Standard Base. He currently is a Senior Technical Staff Member with the IBM Linux Technology Center.

Building and Maintaining RPM Packages
by Jos Vos
Tuesday, 2006-09-05 10:00-18:00

Introduction

In this tutorial attendees will learn how to create, modify and use RPM packages. The RPM Package Management system (RPM) is used for package management on most Linux distributions. It can also be used for package management on other UNIX systems and for packaging non-free (binary) software.

The tutorial will focus on creating RPM packages for Fedora and Red Hat Enterprise Linux systems, but the theory will also apply to package software for other distributions.

Contents

General software packaging theory will be provided as a start, followed by the history and basics of the RPM packaging system.

The headers and sections of an RPM spec file will be discussed. Hints and tricks will be given for each section to enhance the quality of the target package, including the use of macros, adapting software for installing it in an alternative root directory, ensuring correct file ownerships and attributes, the proper use of pre/post (un)installation and "trigger" scripts, and how to deal with package-specific users and init scripts.

Package dependencies and conflicts will be covered, as well as some ways to tweak the automatically generated dependencies, if needed.

Installing files in the proper place requires knowledge of the Filesystem Hierarchy Standard (FHS), hence the basics of the FHS will be discussed.

The tutorial will also show how to properly package binary software, often done for internal system management purposes, and shed light on some of the issues involved, including some legal aspects related to packaging non-free software.

Package repositories and dependency resolution: complementary to RPM, software exists for resolving dependencies, such as up2date, yum, and apt-rpm. This software and the corresponding package repositories will be discussed.
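
To give a flavour of how dependency information can be inspected programmatically, here is a small sketch using the rpm Python bindings; the package name queried ("bash") is just an example, and the bindings need to be installed separately.

  import rpm

  ts = rpm.TransactionSet()
  # look up an installed package in the RPM database
  for hdr in ts.dbMatch('name', 'bash'):
      print(hdr[rpm.RPMTAG_NAME], hdr[rpm.RPMTAG_VERSION], hdr[rpm.RPMTAG_RELEASE])
      # both explicit Requires: tags and automatically generated dependencies
      for req in hdr[rpm.RPMTAG_REQUIRENAME]:
          print("  requires:", req)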

Using RPM on non-Linux systems: although primarily used on Linux systems, RPM can also be used to package software for other (free or commercial) UNIX-like systems. Some aspects of using RPM on non-RPM systems will be discussed.

Besides the theory, several issues will be illustrated with live demonstrations.

Target audience

The tutorial is targeted toward system administrators and software developers who want to create or modify RPM packages, or who want a detailed insight into the way RPM packages are built and can best be used. The attendees need no prior knowledge of RPM, although some basic knowledge of using software packages (as a system administrator using RPM, apt/dpkg, etc.) would be helpful.

About the speaker:

Jos Vos is CEO and co-founder of X/OS Experts in Open Systems BV. He has 20+ years of experience in research, development and consulting -- mostly relating to UNIX systems software, Internet, and security.

His operating system of choice since 1994 is Linux. In the Linux community he is best known for writing ipfwadm and part of the firewall code in the 2.0 kernel. Having used RPM since 1996, he is known to almost never install software without "RPM-ifying" it. He also participated in the design of RPM's trigger scripts, later implemented by Red Hat.

His company X/OS delivers open, standards-based solutions and services. Products include support services for X/OS Linux (an enterprise-class Linux distribution), custom-built firewall/VPN appliances with embedded Linux, and high-availability cluster solutions.

extreme hacking - How to find vulnerabilities in your own network / application
by Roland Wagner
Tuesday, 2006-09-05 10:00-18:00

Statistics say your network will be attacked once every five minutes. Statistics also say that 70%-80% of attacks come from inside your network. How can you be sure you have fixed every vulnerability on every server and workstation in your network? How can you be sure that your firewall works as it should? How can you be sure that your application has no vulnerabilities?

In this tutorial we will start with some basic information about the different phases of penetration testing and hacking, and how a real attacker would try to attack your network. In the practical part of this tutorial you will act as a hacker: you will try to find as much information as possible about the target and use this information to find vulnerabilities. You will be able to look up information about the vulnerabilities and finally find a solution for your security problem.
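
As an illustration of the information-gathering phase, a simple TCP connect scan can be written in a few lines of Python. The target address below is a placeholder (a documentation address), and such a scan must only be run against hosts you have permission to test.

  import socket

  TARGET = "192.0.2.10"            # placeholder target
  for port in range(1, 1025):      # well-known ports only
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.settimeout(0.5)
      try:
          if s.connect_ex((TARGET, port)) == 0:   # 0 means the connect succeeded
              print("open:", port)
      finally:
          s.close()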

During the session there will be some computers running various operating systems available as victims, but we can also scan the networks/servers of the participants if that is permitted. So if you want to find vulnerabilities in your own network, ask the administrator / security engineer of your network for permission to scan it.

Requirements: You should be ready to install some basic tools on your private laptop and have good knowledge of the TCP/IP protocol suite. Knowledge of commonly used protocols (HTTP, FTP, SMTP, POP, SNMP, etc.) would be helpful.

Important: Participants have to provide their own Linux laptop with a working network interface (Ethernet please - no Token Ring, FDDI, etc. :-)

About the speaker:

Roland Wagner is a long-time Unix/Linux user who started with Minix and later, in 1993, with Linux kernel version 0.98. He has been working at Datev eG as an IT security engineer since 1999. He holds degrees in “data processing technology” (“Datentechnik”) from the Georg-Simon-Ohm-Fachhochschule in Nürnberg (Dipl.-Ing. FH) - yes, it's a home match - and in computer science (Dipl.-Inf. Univ.) from the University of Erlangen-Nürnberg. His main points of interest are embedded devices, intrusion detection and intrusion prevention systems, computer forensics and penetration testing. He occasionally teaches computer beginners and spoke about “outsourcing of IT security incident handling” at the “IT-Incident Management & IT-Forensics (IMF 2003)” conference.

Asterisk for the geek
by Stefan Wintermeyer
Wednesday, 2006-09-06 10:00-18:00

For those who know how to install and set up a basic Asterisk server but want to do some nifty stuff with it.

Agenda:

  • Short summary of the very basics
  • IVR
  • Variables and expressions
  • Programming in the extensions.conf
  • Programming with AGI
  • Meeting rooms (conference calls)
  • Queues
  • CallFiles
  • Misc
About the speaker:

Stefan Wintermeyer is the author of the Asterisk book published by Addison-Wesley. His company amooma offers dedicated Asterisk trainings.

Inside the Linux Kernel
by Theodore Ts'o
Wednesday, 2006-09-06 10:00-18:00

Topics include:

  • How the kernel is organized (scheduler, virtual memory system, filesystem layers, device driver layers, networking stacks)
    • The interface between each module and the rest of the kernel
    • Kernel support functions and algorithms used by each module
    • How modules provide for multiple implementations of similar functionality
  • Ground rules of kernel programming (races, deadlock conditions)
  • Implementation and properties of the most important algorithms
    • Portability
    • Performance
    • Functionality
  • Comparison between Linux and UNIX kernels, with emphasis on differences in algorithms
  • Details of the Linux scheduler
    • Its VM system
    • The ext2fs filesystem
  • The requirements for portability between architectures
About the speaker:

Theodore Ts'o has been a C/Unix developer since 1987, and has been a Linux kernel developer since September 1991. He led the development of Kerberos V5 at MIT for seven years, and is the primary author and maintainer of the ext2/ext3 filesystem utilities. Theodore currently serves on the board of the Free Standards Group and contributes to the development of the Linux Standard Base. He currently is a Senior Technical Staff Member with the IBM Linux Technology Center.

Using Xen to partition your system
by Kurt Garloff
Wednesday, 2006-09-06 10:00-18:00

Xen3 was released a little less than one year ago and it has found its way into Linux distributions since then. Virtualization is slowly becoming a mainstream technology. With multicore CPUs, larger machines become more common and offer the resources to host many services on one physical machine. Xen offers a way to partition the system into virtual machines that can be relocated to other physical machines as required.

The tutorial quickly covers some theoretical background on Xen and then moves on to putting it to use. It explains how to plan for networking using various setups with bridging as well as other possibilities. It covers the various ways of providing storage (a virtual disk) to the virtual machines and discusses the options. The setups that allow relocation of virtual machines are treated in detail.

In the next section, we'll go through the process of building a virtual machine around a group of processes. In this way we go a step beyond plain server consolidation: we not only move apps from a real physical machine into a virtual machine, but continue partitioning the apps further into virtual machines, until we end up with a system of many small domains.

For the tutorial, it would be useful if the attendees bring laptops and have Xen running already. We will provide virtual machine images.

About the speaker:

Kurt Garloff started hacking the Linux kernel when he tried, with limited success, to get his machine to reliably support an AM53C974-based SCSI adapter back in 1996. He has since been involved in various open-source projects, mostly kernel related, but also compiler and security related topics. He works for Novell, where he tried to make SUSE Labs work well and now serves as the leader of the architects team. In his spare time, he works on Xen and creates Xen packages for SUSE users.

Configuring and Deploying Linux-HA
by Alan Robertson
Wednesday, 2006-09-06 10:00-18:00

Intended Audience:

System administrators and IT architects who design, evaluate, install, or manage critical computing systems. It is suggested that participants have basic familiarity with System V/LSB-style startup scripts, shell scripting and XML. Familiarity with high-availability concepts is not assumed. This tutorial is intended to provide participants with both the basic theory of high-availability systems and practical knowledge of how to plan for, install and configure highly available systems using Linux-HA.

Description:

The Linux-HA project is the oldest and most powerful open source high-availability (HA) package available, comparing favorably to well-known commercial HA packages. Although the project is called Linux-HA (or "heartbeat"), it runs on a variety of POSIX-like systems including FreeBSD, Solaris, and OS X.

Linux-HA provides highly available services on clusters from one to more than 16 nodes with no single point of failure. These services and the servers they run on are monitored. If a service should fail to operate correctly, or a server should fail, the affected services will be quickly restarted or migrated to another server, dramatically improving service availability.

Linux-HA supports rules for expressing dependencies between services, and powerful rules for locating services in the cluster. Because these services are derived from init service scripts, they are familiar to system administrators and easy to configure and manage.

This tutorial will cover planning, installing, and configuring Linux-HA clusters. Topics covered will include:

  • General HA principles
  • Compilation and installation of the Linux-HA ("heartbeat") software
  • Overview of Linux-HA configuration
  • Overview of commonly used resource agents
    • Managing services supplied with init(8) scripts
  • Sample Linux-HA configurations for Apache, NFS, DHCP, DNS and Samba
  • Writing and testing resource agents conforming to the Open Cluster Framework (OCF) specification
  • Creating detailed resource dependencies
  • Creating co-location constraints
  • Writing resource location constraints
  • Causing failovers on user-defined conditions
About the speaker:

Alan Robertson founded the High-Availability Linux (Linux-HA) project in 1998, and has been its project leader since then. He worked for SuSE for a year, then joined IBM's Linux Technology Center in March 2001, where he works on Linux-HA full time.

Before joining SuSE, he was a Distinguished Member of Technical Staff at Bell Labs. He worked for Bell Labs for 21 years in a variety of roles, including providing leading-edge computing support, writing software tools and developing voice mail systems.

Alan is a frequent speaker at a variety of international open source and Linux conferences.

Collaboration, Community and Future Technology
by Alan Cox
Thursday, 2006-09-07 10:15-11:00

The simple public sharing of information and source code has evolved both a cultural and a legal/contractual basis that has produced Free Software and Open Source, and has fed strongly into ideas like Wikipedia and the Creative Commons. Technology moves ever onwards, and it is becoming more and more practical to share and evolve the designs for 2D and 3D objects; soon it is likely to be practical and viable for people to "print" their own 3D goods. What common trends are found in the development of existing collaboration cultures, and what can be predicted for the future?

About the speaker: T.B.D.

Linux as a Hypervisor
by Jeff Dike
Thursday, 2006-09-07 11:30-12:15

Currently, high-performance virtualization technologies employ either a specialized hypervisor (e.g. VMware and Xen) or unmerged (and, in their current state, unmergeable) kernel patches (OpenVZ and vserver). It would be desirable to use Linux as the host OS, effectively making it a hypervisor. This would avoid adding another OS, with its own set of tools and management problems, to the workload of the host sysadmin.

Since virtualization is a relatively new workload for mainstream operating systems, Linux historically hasn't supported guests very well. This has been changing slowly, as performance improvements for User-Mode Linux (UML) have made their way into mainline. The situation is also about to change rather more quickly, as a kernel virtualization project has started with the goals of supporting lightweight containers such as OpenVZ and vserver, allowing workloads to be containerized so they may be migrated easily, speeding up UML by implementing some virtualization on the host, and providing support for resource management and control.

I will talk about the evolution of Linux hypervisor and virtualization support and where I see it going in the future. The historical aspect of this will be largely from the point of view of UML, since it is the only virtualization technology which uses the facilities of a standard Linux kernel to support fully virtualized Linux guests. Improvements to ptrace will be prominent, as it is central to Linux virtualization support. There are other helpful new facilities which are not primarily considered to be useful for virtualization, such as AIO, direct I/O, FUSE (Filesystems in USErspace), and MADV_REMOVE.
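
As a rough illustration of why ptrace matters here: UML's host support is built on being able to stop a child at every system call boundary. The sketch below calls ptrace(2) through ctypes and simply counts those stops for a traced child; the request constants are the usual Linux values, and the whole thing is purely illustrative rather than anything UML actually ships.

  import ctypes, os

  libc = ctypes.CDLL("libc.so.6", use_errno=True)
  PTRACE_TRACEME, PTRACE_SYSCALL = 0, 24        # Linux ptrace request numbers

  pid = os.fork()
  if pid == 0:
      libc.ptrace(PTRACE_TRACEME, 0, None, None)   # ask to be traced by the parent
      os.execvp("ls", ["ls", "/"])                 # any short command will do
  else:
      os.waitpid(pid, 0)                           # initial stop after execvp()
      stops = 0
      while True:
          libc.ptrace(PTRACE_SYSCALL, pid, None, None)  # run to the next syscall boundary
          _, status = os.waitpid(pid, 0)
          if os.WIFEXITED(status):
              break
          stops += 1                               # one stop at entry, one at exit
      print("syscall stops observed:", stops)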

AIO and direct I/O allow for I/O and memory use improvements by eliminating the double-caching of data that is normally required when doing file I/O. FUSE, while not directly applicable to virtualization, turns out to enable some new management capabilities by allowing a UML filesystem to be exported to the host, where some important management tasks can be performed without needing access to the UML. MADV_REMOVE enables memory hot-plug for UML, which allows the host memory to be managed more efficiently.

These, and other new capabilities, while nice, are incomplete, and I will describe what is needed in the future, and how UML would make use of them.

Finally, I will describe the new virtualization infrastructure project with an emphasis on how it is useful to stronger virtualization technologies such as UML. While this is non-obvious, it turns out that the same facilities which can enable Linux to support lightweight containers such as vserver can also be helpful to full-fledged guest kernels. A prototype addition to this project allows UML processes to execute the affected system call (gettimeofday) at 99% of host speed. As this project is fleshed out, I expect similar performance from other important sets of system calls.

About the speaker:

Jeff Dike graduated from MIT and went to work at Digital Equipment Corp, where he met a number of people who would go on to become prominent in the Linux world, including Jon Hall and a large contingent which now works at Red Hat. He left Digital in 1993 during the implosion of the mini-computer market. He spent the next decade as an independent contractor, and became a Linux kernel developer in 1999 after conceiving of and implementing UML. Since then, UML has been his job, becoming a full-time paid one in mid-2004 when he was hired by Intel.

Linux on the Cell Broadband Engine
by Ulrich Weigand and Arnd Bergmann
Thursday, 2006-09-07 11:30-12:15

The Cell Broadband Engine Architecture, jointly developed by Sony, Toshiba, and IBM, represents a new direction in processor design. In addition to a PowerPC-compatible PowerPC Processor Element (PPE), the Cell BE processor features an array of eight Synergistic Processor Elements (SPEs) supporting a new SIMD instruction set that operates on 128 vector registers. The SPE memory architecture is characterized by a directly addressable 256 KB local storage area plus an explicitly programmable DMA engine to access main memory. Typical applications that benefit from this architecture are in the areas of game, media, and broadband workloads.

To exploit the capabilities of the Cell BE architecture, an application will use both the PPE and the SPEs, with computational kernels running on one or multiple SPEs and the PPE orchestrating computation and data flow. The Linux operating system has been extended to support this new type of application, in addition to regular PowerPC user space code. For that purpose, new kernel interfaces and user space libraries providing access to the SPEs have been created. A port of the GNU tool chain allows code generation for the SPE instruction set. Most of the kernel changes are included in recent releases; for the tool chain changes, work on upstream integration is still in progress.

In the paper, we will present an overview of the Linux kernel changes required to support Cell BE applications. We will also discuss the user space API and ABI that allow applications comprising both PPE and SPE components to be built. The question of how to debug such applications will also be addressed. Finally, we will talk about future enhancements to Cell BE support.

About the speakers:

Arnd Bergmann works for the IBM Linux Technology Center in Böblingen, Germany. He currently maintains the Linux kernel platform code for the Cell Broadband Engine Architecture.

Before joining IBM in 2002, he studied computer engineering in Osnabrück, Germany and in Espoo, Finland. He has been active in the Linux community for about eight years now, with the major contributions in the areas of the System z architecture (aka s390), 64-bit platforms in general, as well as digital media.

Dr. Ulrich Weigand works for the IBM Linux Technology Center in Böblingen, Germany, where he is currently working on the GNU toolchain for the Cell Broadband Engine Architecture.

After receiving a Ph.D. at the Chair of Theoretical Computer Science at the University Erlangen-Nürnberg, he joined IBM in 2000. He has since been working on the port of Linux to the System z architecture, with primary responsibility for the GNU compiler and toolchain for that platform. He is maintainer of the System z back-end in both the GNU compiler and debugger.

Personal Firewalls for Linux Desktops
by Andreas Gaupmann
Thursday, 2006-09-07 12:15-13:00

A personal firewall differs from a traditional firewall, which filters traffic between an untrusted and a trusted network. While network firewalls are installed on routers and bastion hosts, personal firewalls are installed on desktop systems. Another distinction can be made with respect to the subject being protected: the former aim to protect a network, whereas the latter are designed to protect a user. In this paper, it is shown how a personal firewall for a Linux desktop can be implemented.

Linux as a foundation for desktop systems is on the rise. Increased use of Linux on desktops will also lead to more security incidents caused by viruses, worms, and trojans that target Linux systems either specifically or among other platforms. Furthermore, a personal firewall is an important building block in an overall security architecture that aims to protect specifically the user of a desktop system.

The strict separation of kernel space and user space in Linux systems necessitates a layered architecture for the personal firewall. The enforcement of filter rules is only possible in the kernel (enforcement layer); for this purpose, a kernel module has been implemented which uses the Linux Security Modules (LSM) framework of the kernel. The other two components, the decision layer and the graphical user interface of the personal firewall, are implemented in user space.

The decision layer of the personal firewall may be described as event based access control. In this model, security events are actions of applications that might lead to the compromise of the host or a disclosure of private user data. Four types of security events are filtered in order to prevent these breaches of security. Application starts are controlled in order to prevent the start of untrusted programs. Additionally, it is checked whether executables have been replaced by comparing checksums of files. Incoming connections are filtered to regulate the access to local services from remote hosts. Outgoing connections are controlled to prevent applications from connecting secretly (unauthorized by the user) to remote hosts.

Security events are allowed or denied by evaluating rules stored in an SQLite database. If no matching rule is found in this database, the user is asked to decide the verdict on the security event. The graphical user interface provides a user-friendly way of managing security events.
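
A minimal sketch of such a decision layer is shown below; the table layout, the event fields and the interactive fallback are invented for illustration and are not the actual schema of the implementation described in the paper.

  import sqlite3

  conn = sqlite3.connect("pfw-rules.db")
  conn.execute("""CREATE TABLE IF NOT EXISTS rules
                  (event_type TEXT, subject TEXT, verdict TEXT)""")

  def decide(event_type, subject):
      row = conn.execute("SELECT verdict FROM rules WHERE event_type=? AND subject=?",
                         (event_type, subject)).fetchone()
      if row:                                      # a matching rule exists
          return row[0]
      answer = input("%s by %s - allow? [y/n] " % (event_type, subject))
      verdict = "allow" if answer.lower().startswith("y") else "deny"
      conn.execute("INSERT INTO rules VALUES (?, ?, ?)", (event_type, subject, verdict))
      conn.commit()                                # remember the user's decision
      return verdict

  print(decide("outgoing_connection", "/usr/bin/ssh"))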

About the speaker:

Andreas Gaupmann is a graduate student at the University of Applied Sciences at Hagenberg, Austria. His field of study is "Secure Information Systems". He recently finished his diploma thesis, "Design and Implementation of a Personal Firewall".

He holds a Bachelor's degree in Computer Science from the University of Applied Sciences at Hagenberg, Austria. The topic of his Bachelor's thesis was "Secure Programming - Buffer Overflows, Race Conditions, and Format String Attacks".

Moreover, he is the author of a patch for OpenSSH that enables user authentication based on a zero-knowledge protocol. The website of the project is located at http://zk-ssh.cms.ac/. He presented the results of this work at the IT security conference "Sicherheit 2006" in Magdeburg.

Smart Card Technology and Linux Integration
by Heiko Knospe
Thursday, 2006-09-07 12:15-13:00

This paper discusses the use of hardware security modules with Linux-based host systems. Microprocessor-based integrated circuit cards with cryptographic capabilities (smart cards) are already well-established security modules, and new types of tokens (with USB, MMC and contactless interfaces) have evolved.

Smart cards can be used for a variety of (mostly security-related) applications: identification, authentication, signature, encryption, secure key and data storage etc. Smart cards are connected via an interface device (reader) to a host system (e.g. a PC). The use of smart cards requires an on-card application, a reader driver and host middleware and software.

During the last couple of years, a number of projects developed software, middleware and drivers for Linux on the host side. The paper analyses major on-card and off-card architectures and implementations, and explains their interplay:

  • Different types of cards and tokens and (quasi-)standards (ISO 7816, Global Platform, GSM SIM, PKCS#15, Java Card, ...)
  • Reader drivers (OpenCT and PC/SC architecture)
  • Interface standards and APIs (in particular PKCS#11)
  • High-level APIs and libraries (e.g. Open Card Framework)
  • Software, tools and smart card enabled applications (OpenSC, MuscleCard, PAM modules, OpenSSH, OpenSSL, Mozilla, ...)

The paper concludes with an outlook on trends in hardware security modules and their applications.

About the speaker:

Heiko Knospe is a Professor for Mathematics and IT Security at Cologne University of Applied Sciences (FH Koeln). His research interests include security of Next-Generation-Networks, AAA protocols, mobile security and cryptographic tokens. He conducted a number of projects in these fields.

Benchmarking, round 2: I/O performance
by Felix von Leitner
Thursday, 2006-09-07 14:30-15:15

In the last round of benchmarks, presented at Linux-Kongress 2003, I showed some benchmark results for the BSDs and Linux, mostly concerning scalable network programming. The results have led to marked improvements in scalability for most of the players.

This round of benchmarks will try to do the same, but for I/O performance. We have taken a real life data set from a high volume production system, and replayed several thousand HTTP requests. We also had more modern hardware (gigabit ethernet, SMP) at our disposal, and tried to exploit it with varying success.

The results surprised us in several cases, and provide some interesting lessons to be learned. This round, we also measured some commercial operating systems.

About the speaker:

Felix von Leitner has been involved with Linux since version 0.98 and has focused on scalability and high performance for years.

In his professional life, he consults companies on IT security for a small security company called Code Blau, which he co-founded. He spent most of this year doing code audits of commercial software.

Samba status update
by Volker Lendecke
Thursday, 2006-09-07 14:30-15:15

Samba 3 has undergone quite a number of changes in the last months, and we are still changing it rapidly. Hopefully for the better. In this talk I will present the latest development in detail, such as:

  • The handling of users, groups and SIDs has been completely re-worked. A consequence is that nested groups ("local" groups in Windows-speak) now really work.
  • Management of users and groups is done by a new "net sam" utility; "net groupmap" is deprecated.
  • Clustering support is undergoing quite a bit of development right now; possibly I can give a live demonstration of what we are doing at the time of the conference. If not, I will present the current status.
  • Another field of work is remote management and monitoring. Depending on how development goes, I will present what has been completed by the time of the conference.
  • We have put quite a bit of effort into porting back some Samba4 infrastructure to Samba3. Transaction-based talloc and auto-generated MS-RPC stubs using PIDL are examples. I will present the current status in this area.
About the speaker:

Volker Lendecke has been a Samba Team member since 1994, and has been active in Samba development since that time. Volker is also a co-founder of SerNet Service Network GmbH in Göttingen, Germany, where he does a lot of Samba consulting, development and troubleshooting.

File System (Ext2) Optimization for Compressed loopback device
by Kenji Kitagawa
Thursday, 2006-09-07 15:15-16:00

We developed a file system optimization tool, "Ext2optimizer", which re-arranges data blocks for a compressed loopback block device.

In recent years, block-device-level compression has become widely used, especially for live CDs, in systems such as “cloop” and “SquashFS”. Although it requires runtime decompression, decompressing is faster than reading uncompressed data, because the bandwidth of a CD-ROM is low and current CPUs are fast.

However, there is a block-size gap between the file system (4KB is the default on ext2) and the compressed loopback device (64KB is the default on cloop). This causes redundant read accesses and slow boot times. Furthermore, the file system does not try to reduce disk seeks on a compressed block device.

Ext2optimizer re-arranges the data blocks on ext2 and groups them together in order to pack them into fewer blocks of the compressed block device. This reduces block accesses and disk seeks. The optimization is based on a profile of file reads: it speeds up booting if the profile is taken at boot time, and it speeds up application execution if the profile is taken at run time of the application.
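
The effect can be illustrated with a back-of-the-envelope calculation: with 4KB ext2 blocks and 64KB cloop blocks, 16 file system blocks share one compressed block, so packing the blocks that a profile touches next to each other directly reduces the number of compressed blocks that must be read and decompressed. The block numbers in the sketch below are invented.

  BLOCK = 4 * 1024                     # ext2 block size
  CHUNK = 64 * 1024                    # cloop compressed block size
  PER_CHUNK = CHUNK // BLOCK           # 16 ext2 blocks per compressed block

  profile = [3, 77, 4096, 5000, 9216, 10240, 20480, 30000]   # blocks read at boot (invented)

  def chunks_touched(blocks):
      return len({b // PER_CHUNK for b in blocks})

  print("scattered blocks touch", chunks_touched(profile), "compressed blocks")
  packed = list(range(len(profile)))   # after optimization the blocks sit side by side
  print("packed blocks touch   ", chunks_touched(packed), "compressed blocks")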

Ext2optimizer does not change the ext2 format, so the result can still be used as a normal ext2 file system. The current version only re-arranges data blocks; metadata is not changed. The optimization is not logically perfect, but experiments show its effectiveness for live CDs and other boot methods.

We applied Ext2optimizer to KNOPPIX 3.8 Japanese Edition. We took a profile at boot time and optimized the ext2 file system on the cloop. The original version required 68MB of data reads and took 202 seconds to boot. The optimized version required 45MB of data reads and took 146 seconds to boot. The optimization thus reduced the data read to 66% and the boot time to 72% of the original.

Ext2optimizer is also effective for HTTP-FUSE KNOPPIX, which is an Internet thin client. HTTP-FUSE CLOOP uses a network loopback block device and saves each compressed block to a file. The compressed block size is made larger in order to reduce the number of downloads, so the current compressed block size of HTTP-FUSE CLOOP is 256KB by default. As a result, the current HTTP-FUSE KNOPPIX required 170MB of block files at boot time. Ext2optimizer could group the necessary ext2 data blocks together and reduced this to 40MB of HTTP-FUSE CLOOP block files. Because of the dramatic decrease in the number of downloaded files, booting becomes faster and the load on the servers is reduced. This improves the scalability of network boot systems such as HTTP-FUSE KNOPPIX.

About the speaker:

Kenji Kitagawa(1), Tan Hideyuki(1), Daisuke Abe(1), Daisaku Chiba(1) Kuniyasu Suzaki(2), Kengo Iijima(2), Toshiki Yagi(2)

Mail:
(1){kitagake, tanh, abeda, chibad}@alpha.co.jp
(2){k.suzaki, k-iijima, yagi-toshiki}@aist.go.jp
(1) Alpha Systems Inc.
(2) National Institute of Advanced Industrial Science and Technology

Linux Clients in Microsoft Windows (ADS) Environments
by Lars Müller and Günther Deschner
Thursday, 2006-09-07 15:15-16:00

The presentation demonstrates how Linux workstations can be seamlessly integrated into Microsoft environments using today's standard graphical user interfaces as shipped by the Linux vendors.

Integrating into the Windows world nowadays means integrating into Microsoft's Active Directory Service (ADS) and addressing its security requirements. To implement the holy grail of Single Sign On authentication on Linux, the authors - both members of the Samba Team - worked on expanding the existing Samba winbind infrastructure, researched various aspects of the Windows client-server interaction and also coordinated the required changes to involved open source projects such as GNOME and KDE.

During the talk, the authors will focus on the Samba winbind daemon, which provides the interface to all relevant core technologies like the directory, Kerberos, PAM, and NSS. Key features such as Kerberos-based single sign-on and offline authentication - which is important for mobile or home office usage - will be demonstrated using typical corporate domain infrastructures. In addition, the talk will point out how winbind enforces ADS-wide security policies by applying login restrictions, communicating password policy requirements and enforcing lockout restrictions.

The presentation will point out the motivation, the mandatory and supplementary requirements, the architectural overview, the open issues, and the currently available resources of this Single Sign On integration. The authors will also describe the obstacles to implementing a modern, directory-based security model on top of a Unix-based operating system and discuss how the core authentication layers such as PAM and NSS are suited to fulfill today's logon tasks.

During the talk, a live demonstration will give a brief overview of the typical use cases, including the mandatory domain join process, domain logons, Kerberos ticket handling, seamless authentication, access to domain-wide services (web, file) and a password change process with various security policies in effect.

About the speakers:

Günther Deschner started to work with Linux and OSS projects as an IT consultant for SuSE Linux Solutions AG in 2001. He was involved in various large Linux customer projects and focused on interoperability with Windows, directory services and security. He worked for Service Network (SerNet) GmbH Göttingen (Germany) and became a member of the Samba Team in 2003. He is now working as a Senior Software Engineer in the SUSE Labs. His recent project is the Single Sign On integration of Linux clients in Active Directory based environments. Günther lives with his family in Berlin, Germany.

Lars Müller started to work with Linux in the mid-nineties at the University of Göttingen. There he was involved in the dormitory and general student network, which in particular required the operation of a distributed diskless Linux client pool. He joined the SUSE Linux business support department in 2000, having been a volunteer SUSE Linux beta tester for several years. As the maintainer of the Samba package he was involved in the development of SUSE Linux based business products and switched to the development department in 2002. There he is still working as a Senior Software Engineer in the SUSE Labs. He became a member of the Samba Team in 2005.

dm-raid45 - A Device-Mapper target for RAID4 and RAID5 mappings
by Heinz Mauelshagen
Thursday, 2006-09-07 16:30-17:15

Device-Mapper, the Linux 2.6 kernel's generic device-mapping facility, is capable of mapping block devices in various ways (e.g. linear, striped, mirrored). The mappings are implemented in runtime-loadable plugins called mapping targets, which take a logical device address and sector and remap it to one or more underlying (logical) devices.

Targets can be implemented to support arbitrary software RAID solutions on Linux 2.6, such as RAID4 and RAID5.

dm-raid45 is a new device-mapper target (i.e. a mapping plugin) to drive RAID (Redundant Array of Independent Disks) sets.

RAID level 4 with a dedicated parity drive and RAID level 5 with rotating parity are both supported by this target. It is capable of doing background synchronization of sets utilizing a dirty-log, which keeps track of dirtied regions (i.e. those with writes in flight).

At set activation, the dirty-log is queried and resynchronization is started for any dirty regions, one by one, in order to regain the consistency of the RAID set, e.g. after a system crash.

The same mechanism is used to set up a new RAID set: all regions are dirtied and the set is activated, which causes resynchronization of all regions of the new set.
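
A toy model of this region log, much simplified from the real dirty-log code, might look like the following sketch; the region size and the set-based bookkeeping are illustrative only.

  REGION = 1024 * 1024                     # bytes covered by one dirty-log region (invented)

  class DirtyLog:
      def __init__(self):
          self.dirty = set()
      def mark_dirty(self, byte_offset):   # called before a write is issued
          self.dirty.add(byte_offset // REGION)
      def mark_clean(self, region):        # called once a region is resynchronized
          self.dirty.discard(region)

  log = DirtyLog()
  log.mark_dirty(5 * REGION + 17)          # writes in flight in regions 5 and 9
  log.mark_dirty(9 * REGION)

  # at set activation (e.g. after a crash) only the dirty regions are resynced
  for region in sorted(log.dirty):
      print("resynchronizing region", region)
      log.mark_clean(region)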

Full read/write access is possible to the whole RAID set during resynchronization.

If one disk within a set fails, the set becomes degraded, but access to all data is still possible. The defective drive can be replaced and dm-raid45 configured to resynchronize the new drive.

My talk will give a short device-mapper architecture/feature overview and elaborate on the dm-raid45 target feature set.

About the speaker:

Heinz Mauelshagen is the Linux Logical Volume Manager author and works on related topics like device-mapper and dmraid for Red Hat.

Development in OpenOffice.org
by Florian Reuter
Thursday, 2006-09-07 16:30-17:15

OpenOffice.org is the largest and most demanding application running on the Linux desktop today. This talk will highlight some of the challenges faced by OO.o - running on (m)any distros, across desktop environments, etc. We'll get a view into the feature depth, and why having so many features is actually important for interop.

We will also see some of the recent performance improvement work, and go around a development iteration before your eyes to show it's possible to build and hack on OO.o. Finally you'll be persuaded that OO.o is the most critical piece driving Free software adoption on the desktop.

About the speaker:

Florian Reuter is a software engineer who has been working on the OpenOffice.org Word import and export filters. Prior to this he did research on fast XML validation and schema-based language binding. He is a member of the OpenDocument TC and also CTO of the OpenDocument Foundation. Currently he is focused on the "meta data" enhancements for OpenDocument and their impact on Service Oriented Architectures.

iSCSI in Linux
by Hannes Reinecke
Thursday, 2006-09-07 17:15-18:00

The iSCSI layer in Linux, Open iSCSI, supports a wide range of technologies including software iSCSI, iSER, traditional host bus adapters and even target mode drivers. To enable this flexibility the Open iSCSI design has quickly evolved over the last year, but the primary goal of placing the data path in the kernel and pushing the session management and iSCSI state machine to user space has remained unchanged.

With the data path in the kernel, users have been able to enjoy a high-performance software initiator, and at the same time developers have been able to quickly extend and debug the control path. However, the design has several drawbacks. A driver that must communicate between the kernel and user space just to change a field of an object is strongly dependent on a stable API; small changes can cause great pain to users and developers. And, because a swap device could be accessed through the iSCSI layer, great care must be taken to make sure I/O can always make forward progress.

This paper will focus on the Open iSCSI design, problems that have been encountered due to the Open iSCSI data and control path split, the state of iSCSI in Linux and future improvements.

About the speaker:

Hannes Reinecke studied physics, with a main focus on image processing, in Heidelberg from 1990 until 1997, followed by a PhD at Edinburgh's Heriot-Watt University. He worked as a sysadmin during his studies, mainly at the Mathematical Institute in Heidelberg.

He has been a Linux addict since the earliest days (0.95), contributing various patches to get Linux up and running. He now works for SUSE Linux Products GmbH to support IBM's S/390 architecture on Linux. His main points of interest are (i)SCSI, multipathing, udev and device configuration. And S/390, naturally. He is also the occasional maintainer of the aic79xx driver.

Playing BlueZ on the D-Bus
by Marcel Holtmann
Thursday, 2006-09-07 17:15-18:00

The integration of the Bluetooth technology into the Linux kernel and the major Linux distributions has progressed really fast over the last two years. The technology is present almost everywhere: all modern notebooks and mobile phones are shipped with built-in Bluetooth. Using Bluetooth from a Linux-based system is easy and in most cases only needs a one-time setup, but all the tools are still command line based. In general this is not so bad, but for greater success the Bluetooth technology needs to be seamlessly integrated into the desktop. There have been approaches for the GNOME and KDE desktops. Both have been quite successful and made the use of Bluetooth easy. The problem, however, is that both implemented their own framework around the Bluetooth library and its daemons, and there was no way for programs from one system to talk to the other.

With the final version of the D-Bus framework and its adoption into the Bluetooth subsystem of Linux, it will be simple to make all applications Bluetooth aware. The idea is to establish one central Bluetooth daemon that takes care of all tasks that can't or shouldn't be handled inside the Linux kernel. These jobs include PIN code and link key management for authentication and encryption, caching of device names and services, and also central control of the Bluetooth hardware. All possible tasks and configuration options are accessed via the D-Bus interface. This allows the internals of GNOME and KDE applications to be abstracted from the technical details of the Bluetooth specification. Even other applications will get access to the Bluetooth technology without any hassle.

The talk will introduce the new D-Bus interface for Bluetooth and demonstrate how easily it can be used from within different programming and scripting languages. Besides the currently established interface it will also cover the planned extensions. Devices like the Nokia 770 are already using D-Bus for the Bluetooth technology, and this talk will also cover this device and other Linux-based embedded devices that use Bluetooth. Since D-Bus is an open standard, this talk will also cover how other operating systems like FreeBSD or OpenSolaris might be able to use it and what is needed from the client applications to support these operating systems.
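
To give an impression of how little code a scripting language needs, here is a small sketch using the python-dbus bindings. The 'org.bluez' service name and the '/org/bluez/hci0' object path are assumptions that depend on the installed BlueZ version and setup; the standard Introspect() call is used only to discover what the adapter object actually offers.

  import dbus

  bus = dbus.SystemBus()                              # the Bluetooth daemon lives on the system bus

  # service name and object path as assumed for a BlueZ 3.x setup
  adapter = bus.get_object('org.bluez', '/org/bluez/hci0')

  # every D-Bus object implements Introspect(); print the adapter's interface description
  introspectable = dbus.Interface(adapter, 'org.freedesktop.DBus.Introspectable')
  print(introspectable.Introspect())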

About the speaker:

Marcel Holtmann is one of the core developers of the official Linux Bluetooth stack, which is called BlueZ, and started working with the Bluetooth technology back in 2001. His work includes new hardware drivers and upper-layer protocol implementations. Another part is the integration of Bluetooth into other subsystems of the Linux kernel, libraries and tools. In January 2004 he took over the primary maintainer role from the original author, Max Krasnyansky. Together with Jean Tourrilhes he also maintains the OpenOBEX project. And he was also responsible for the IrDA and Bluetooth integration of the Gnokii project.

Linux Kernel - How is it being developed and what's coming next?
by Theodore Ts'o
Friday, 2006-09-08 9:30-10:15

The Linux kernel development model is unique compared to other open source projects. This talk will explore how it has changed over the years, why those changes were made, and the strengths and weaknesses of our current approach. In addition, this talk will also give a broad view of recent changes to the kernel and the general direction of future development in the Linux kernel.

About the speaker:

Theodore Ts'o has been a C/Unix developer since 1987, and has been a Linux kernel developer since September 1991. He led the development of Kerberos V5 at MIT for seven years, and is the primary author and maintainer of the ext2/ext3 filesystem utilities. Theodore currently serves on the board of the Free Standards Group and contributes to the development of the Linux Standard Base. He currently is a Senior Technical Staff Member with the IBM Linux Technology Center.

Best Practices in Linux Kernel Testing
by Poornima Bangalore
Friday, 2006-09-08 10:15-11:00

Linux is increasingly becoming a widely used open source operating system. Of late, it has been used for running many critical and real-time applications. Hence, testing the Linux kernel is gaining more importance than ever. But there are many challenges in testing the Linux kernel because of the way Linux releases work. This paper talks about the challenges faced in testing the Linux kernel and some of the best practices that can be followed to test it effectively.

Linux kernel testing is different from traditional testing, the main reason being that it is open source. In traditional testing, where proprietary software is involved, test teams are always kept up to date about the features present in the new software and test cases are provided for testing; it is a closed environment. In the case of open source software there is no such coordination: many different people across the world are involved in development and testing, and lots of changes keep getting into the different kernels that are released. Keeping tabs on all these changes and selecting the appropriate tests based on them is a daunting task, and this responsibility lies with the tester. The kernel testing community uses certain techniques to handle these unique issues.

There are different sub-trees in the Linux releases, like stable kernels, rc kernels, mm kernels, etc. Stable kernels are released every 4-6 weeks, rc kernels are available every week, and there is no fixed schedule for mm kernels; sometimes one can expect as many as 3 mm kernels within a week's timeframe. Given this type of schedule, it becomes very difficult to systematically and completely test a given kernel. One has to adopt different strategies to speed up the testing activity and at the same time achieve efficient coverage of the test suites.

Testing the mainline kernels thoroughly and eliminating bugs as early as possible is very important, as these kernels will later be picked up by the different distributions. Making the mainline kernels bug free and preventing bugs from sneaking into the distros is the first step in ensuring that the distributions are more stable.

Building the Linux kernel on different types of hardware is itself a big challenge. The kernel configuration changes based on the hardware present in a system, such as SCSI disks, PCI cards, Ethernet adapters, etc. Even a small mistake in selecting the right configuration can keep the kernel from booting. Linux is available on a variety of platforms like x86, AMD64, POWER, etc.; if testing has to be done on all these platforms, the right kind of hardware has to be available. The 2.6 version of the kernel has undergone a sea of changes compared to the 2.4 kernel, but not all the test cases have changed accordingly. This might leave many of the new features untested or not thoroughly tested. Testers need to delve deep to bridge these gaps.

In our project, we have evolved to overcome these challenges. This paper will describe in detail the best practices and some of the automation techniques that are being followed to perform Linux kernel testing efficiently.

About the speaker:

Poornima B N holds an MS degree in System Software from BITS, Pilani, India. She is currently leading a team testing Linux distributions and kernels. Poornima has around 8 years of experience in the IT industry and has worked with IBM throughout her career. During her tenure, she has worked on various testing and support projects. Her areas of interest are test automation and operating system internals.

Raghuveer Raghavendra is a co-author of this paper. He holds a B.Tech degree in Computer Science from the Visveswaraiah Technological University. Raghuveer did his engineering at The National Institute of Technology, Mysore. Since he joined IBM in July 2005, he has been working in the LTC (Linux Technology Center) test team, testing the mainline Linux kernel.

Trusted Boot of HTTP-FUSE KNOPPIX
by Kuniyasu Suzaki
Friday, 2006-09-08 10:15-11:00

We developed a trusted network loopback block device and integrated it into HTTP-FUSE KNOPPIX. It maintains the chain of trusted boot: TPM-enabled BIOS, trusted boot loader (Trusted GRUB) and trusted block access to the root file system on the network loopback block device.

HTTP-FUSE KNOPPIX is intended to be an Internet thin client. The “AutoConfig” of KNOPPIX can detect devices and set up drivers on any PC. The network loopback block device “HTTP-FUSE CLOOP” enables us to fetch the necessary blocks of the root file system from HTTP servers, so the initial boot image can be small (6MB). Users can try KNOPPIX without downloading the full CD (700MB)/DVD (4GB) image. It also makes it easy to customize KNOPPIX, because the root file system can be updated dynamically on the HTTP servers. Unfortunately the current HTTP-FUSE KNOPPIX has no security function: it is easy to inject fraudulent block files.

To solve this problem we developed a trusted network loopback block device. The loopback device is reconstructed from small block files on HTTP servers. The block files are made by splitting the block device into pieces of a fixed size (256KB by default). The name of each block file is the unique hash value (SHA1) of its contents, and the block file names are listed in an “index” file. The driver obtains the “index” file in a secure way (HTTPS or similar) and sets up the loopback block device. When a block is accessed and the relevant block file is read, the driver checks the validity of the contents against the SHA1 file name. The block files can therefore be distributed via untrusted channels (proxy caches, etc.).
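
The validity check boils down to comparing the SHA1 of a downloaded block file with its own file name; a minimal sketch, with invented URLs and file layout, could look like this:

  import hashlib
  import urllib.request

  def fetch_block(base_url, block_name):
      data = urllib.request.urlopen(base_url + "/" + block_name).read()
      if hashlib.sha1(data).hexdigest() != block_name:   # the name must equal the content hash
          raise IOError("block %s failed its integrity check" % block_name)
      return data                                        # safe to hand to the loopback device

  # the index file, fetched over HTTPS, maps block numbers to hash names, e.g.:
  # block = fetch_block("http://mirror.example.org/knoppix", index[block_no])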

We integrated the trusted network block device into a trusted-boot Linux. The initial boot is measured by the “TPM” security chip, “Trusted-GRUB” and a trusted Linux kernel. After the kernel boots, the root file system is measured at run time by the trusted loopback block device we developed. This maintains the chain of trusted boot.

The current implementation depends on Atmel’s TPM chips (v1.1 or v1.2), Trusted-GRUB, Linux kernel 2.6.17 with the TPM driver, and the trusted network loopback block device. The bandwidth of the current trusted block driver is 20MB/sec, which includes validating the block contents against the SHA1 file names; this is still faster than a CD/DVD-ROM drive.

As a future plan, we will integrate the trusted boot methods into Xenoppix, which is a combination of the virtual machine monitor “Xen” and KNOPPIX. This will allow booting several OSes (OpenSolaris, MINIX, Plan 9, etc.) in a trusted way.

About the speaker:

Kuniyasu Suzaki, National Institute of Advanced Industrial Science and Technology (Japan), graduated from the Tokyo University of Agriculture and Technology.

I maintain the KNOPPIX Japanese edition. I am interested in Linux emulators and virtual machines, so I customized KNOPPIX for User-Mode Linux, coLinux, QEMU and Xen. I am also interested in Internet-booted OSes and developed SFS-KNOPPIX and HTTP-FUSE KNOPPIX.

Where is the kernel memory going? Memory usage in the 2.6 kernel
by Andi Kleen
Friday, 2006-09-08 11:30-12:15

The original Linux 1.0 kernel ran fine on a PC with 4MB of memory, of which the kernel used only a small fraction. Later versions weren't as modest in their memory requirements. There have also been some complaints that the 2.6 kernel needs more memory than the 2.4 kernel.

Analysis of memory usage has often focused on code size only. This paper focuses instead on the memory the kernel allocates at run time; user space memory consumption is not covered.

The Linux kernel has various memory allocation subsystems: page allocator, bootmem, slab, kmalloc.

The paper first gives a short overview of these allocators and their interdependencies, and identifies the basic overhead inherent in their use of memory.
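
As a point of reference for these interfaces, the hypothetical kernel module below shows which call corresponds to which allocator (a sketch for illustration only, not part of the paper's tooling; bootmem is omitted because it is only available during early boot):

  /* alloc_demo.c -- touch the kmalloc/slab and page allocator interfaces */
  #include <linux/module.h>
  #include <linux/init.h>
  #include <linux/errno.h>
  #include <linux/slab.h>       /* kmalloc/kfree, backed by slab caches  */
  #include <linux/gfp.h>        /* __get_free_pages, the page allocator  */

  static void *small_buf;
  static unsigned long pages;

  static int __init alloc_demo_init(void)
  {
      /* kmalloc: byte-sized allocations, rounded up to a slab size class */
      small_buf = kmalloc(512, GFP_KERNEL);

      /* page allocator: whole pages, here 2^1 = 2 contiguous pages */
      pages = __get_free_pages(GFP_KERNEL, 1);

      if (!small_buf || !pages) {
          kfree(small_buf);
          if (pages)
              free_pages(pages, 1);
          return -ENOMEM;
      }
      printk(KERN_INFO "alloc_demo: kmalloc at %p, pages at %lx\n",
             small_buf, pages);
      return 0;
  }

  static void __exit alloc_demo_exit(void)
  {
      kfree(small_buf);
      free_pages(pages, 1);
  }

  module_init(alloc_demo_init);
  module_exit(alloc_demo_exit);
  MODULE_LICENSE("GPL");

A kmalloc() of 512 bytes, for example, is typically served from a slab cache of 512-byte objects, so its per-object overhead differs from that of a raw page allocation; quantifying such overheads is part of what the paper examines.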

In addition there are various caches that work on top of these allocators and impact memory usage.

Identifying run time usage is difficult because the kernel follows the "free memory is bad memory" motto and tries to keep all memory filled with caches.

When the kernel runs out of memory while allocating something, it enters a "memory pressure" state in which some of the caches are shrunk and their memory is returned for other purposes. It is therefore important to identify the overhead of the basic data structures that remain after the caches have been shrunk. However, many caches also double as data structures for objects currently in use and so cannot be completely ignored. This paper presents some heuristics to estimate the basic overhead of caches under memory pressure.
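
The paper's own measurements rely on systemtap scripts (see below), but a very crude way to get a feel for this effect is to force the caches to shrink by hand and see how much memory comes back; whatever a cache refuses to give up approximates its basic overhead. The sketch below is only such a rough heuristic, and assumes root privileges and a kernel that already provides /proc/sys/vm/drop_caches (2.6.16 or later):

  /* dropcache.c -- how much memory do the caches give back when asked? */
  #include <stdio.h>

  /* read the MemFree value (in kB) from /proc/meminfo */
  static long mem_free_kb(void)
  {
      char line[128];
      long kb = -1;
      FILE *f = fopen("/proc/meminfo", "r");

      if (!f)
          return -1;
      while (fgets(line, sizeof(line), f))
          if (sscanf(line, "MemFree: %ld kB", &kb) == 1)
              break;
      fclose(f);
      return kb;
  }

  int main(void)
  {
      long before, after;
      FILE *f;

      before = mem_free_kb();
      f = fopen("/proc/sys/vm/drop_caches", "w");
      if (!f) {
          perror("drop_caches");
          return 1;
      }
      fputs("3\n", f);          /* 3 = drop page cache and slab caches */
      fclose(f);
      after = mem_free_kb();

      printf("reclaimed by dropping caches: %ld kB\n", after - before);
      return 0;
  }

This only reclaims clean, unreferenced cache memory, which is exactly why the paper's heuristics are needed for the part that remains.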

It then shows how much memory the kernel needs for its data structures in some standard usage scenarios, and how much the various caches need under memory pressure.

For this, tracking tools such as systemtap scripts are used to identify how much memory each kernel subsystem allocates.

Finally, it offers some suggestions for improvements and points out areas to concentrate on to reduce memory usage.

About the speaker:

Andi Kleen is a long-term Linux kernel hacker. He was originally involved with the network code, but has worked on many areas of the Linux kernel over time.

In recent years he has maintained the x86-64 architecture port.

He works as a kernel developer for SUSE Labs/Novell and lives in Bremen.

Linux HA v2 - One CRM to rule them all
by Lars Marowsky-Brée
Friday, 2006-09-08 11:30-12:15

Linux HA (heartbeat) version 2 provides a very powerful user-space cluster resource manager (CRM) for high-availability clustering (fail-over and switch-over) in clusters of two to sixty-four nodes. It has a rich dynamic dependency scheme between resources. Resources have progressed beyond simple one-instance-on-one-node objects to groups, clones (N instances on M nodes) and master/slave resources (K masters chosen from the clones) - all of which can be affected by time and the phase of the moon - and all of this is exported via a new GUI, raw XML or even CIM. This provides for flexible system automation and interesting new opportunities for complex support case escalations.

This presentation will introduce the project to the audience and hint at, from the 10-mile-high view, how these advanced features are used to model and manage the current buzz:

  • OCFS2 with membership orchestrated from user-space via clones;
  • virtual clusters of Xen guests;
  • layered/stacked clusters of clusters (i.e., virtual guests forming a cluster on top of a cluster of physical machines);
  • replicated databases and storage using the master/slave model.

Scaling down, some thoughts will be voiced on how clusters of one node could benefit from these features, as a cross-over between init and cron on steroids.

About the speaker:

Lars Marowsky-Brée is employed at SUSE Labs inside Novell, working on high-availability and cluster-related topics ranging from cluster resource management, multipath I/O and cluster filesystems to cluster administration.

Using Linux since 1994 in various roles ranging from user, BOFH and consultant to developer, he has gained a lot of real-life experience with the various reasons for service outages and their one common factor. He soon began to appreciate the complexities of keeping a service running in the face of malicious software, possessed hardware, well-intentioned users and the world at large, and loves to rant about it; this has kept him employed and invited to conferences ever since, having presented at various major Linux conferences in the past.

In early 2000, he took the chance to work on Linux High Availability exclusively and joined S.u.S.E. Being a natural pessimist, he finds that clusters work just as he expects.

Speeding Up Thread-Local Storage Access in Dynamic Libraries on the ARM Platform
by Glauber de Oliveira Costa
Friday, 2006-09-08 12:15-13:00

As multi-core processors become the rule rather than the exception, multi-threaded programming is expected to expand from its current niches to more widespread use, in software components that have not traditionally been concerned about exploiting concurrency. Accessing thread-local storage (TLS) from within dynamic libraries has traditionally required calling a function to obtain the thread-local address of the variable. Such function calls are several times slower than typical addressing code that is used in executables. While instructions used in executables can assume thread-local variables are at a constant offset within the thread Static TLS block, dynamic libraries loaded during program execution may not even assume that their thread-local variables are in Static TLS blocks.

Since libraries are most commonly loaded as dependencies of executables or other libraries, before a program starts running, the most common TLS case is that of constant offsets. Recently, an access model that enables dynamic libraries to take advantage of this fact without giving up the ability to be loaded during execution was proposed and successfully implemented on the IA32, AMD64/EM64T and Fujitsu FR-V architectures. On these systems, experimental results showed that the new model consistently outperforms the old one, particularly in the most common case, where the speedup is often well over 2x, bringing it close to the performance of the access models used in plain executables.
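
The proposed model itself is described in the talk; as a rough illustration of what is at stake, the fragment below uses GCC's standard tls_model attribute to contrast the default general-dynamic access in a shared library, which goes through a call to __tls_get_addr(), with the constant-offset initial-exec access that is fast but gives up the ability to be loaded at run time once Static TLS space is exhausted:

  /* tlsdemo.c -- build as a shared object:
   *   gcc -O2 -fPIC -shared tlsdemo.c -o libtlsdemo.so
   * Illustration only: these are GCC's standard TLS access models,
   * not the new model proposed in the paper.                        */

  /* default in -fPIC code: general-dynamic, every access calls
   * __tls_get_addr() to locate the variable                         */
  __thread int gd_counter;

  /* initial-exec: assumed to live in the Static TLS block, so the
   * access is a constant offset from the thread pointer -- but the
   * library may then fail to dlopen() later                         */
  __thread int ie_counter __attribute__((tls_model("initial-exec")));

  int bump_gd(void) { return ++gd_counter; }
  int bump_ie(void) { return ++ie_counter; }

Comparing the code the compiler emits for the two accessor functions (for example with objdump -d) makes the function-call versus constant-offset difference that the paper quantifies directly visible.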

This talk details the new access model and its implementation for ARM processors, highlighting its particular issues and the potential gains for embedded systems.

About the speaker:

Glauber de Oliveira Costa has been a Free Software enthusiast since 2001, when he first came into contact with GNU/Linux. An undergraduate student at the University of Campinas since 2000, he changed his course to Computer Engineering in 2005, a change primarily driven by his passion for Free Software.

In 2004 he began working at the IBM Linux Technology Center, where he works to this day. Among his contributions he highlights patches to the Linux kernel and some small contributions to other Free Software projects such as the Xen hypervisor, the SELinux tools and the Gaim instant messenger. More recently he has been involved with the GNU toolchain (gcc + glibc + binutils), porting the new thread-local storage ABI from i386, AMD64 and FR-V to ARM processors. Being relatively new to the subject, Glauber is glad to consider himself Alexandre Oliva's most dedicated padawan.

Managing enterprise data-centers with openQRM
by Matthias Rechenburg
Friday, 2006-09-08 12:15-13:00

High availability, scalability, fast and flexible deployment, and service and server virtualization are the key requirements for modern enterprise data centers. The intention of this new open-source project is to combine and integrate all kinds of useful system management tools within a unified, generic data-center management user interface.

openQRM is an open source system management platform which integrates existing components to manage enterprise data centers efficiently. Its architecture extends to and embraces other existing data center technologies through “plug-ins”. The out-of-the-box functions include, but are not limited to, provisioning, monitoring, reporting and high availability. Further functionality and integration with third-party software is available, such as VMware and Xen as partitioning technologies, LDAP authentication, iSCSI (booting from iSCSI targets) and more.

openQRM manages physical servers and virtual machines seamlessly and automatically. It assigns server resources to applications according to defined policies, dynamically adjusts the number of allocated servers according to actual usage and provides high availability for enterprise services and applications. A secure web interface for easy management and control, combined with powerful command line tools, allows servers and services to be deployed with a single mouse click. Inside the openQRM server a generic, logical layer called the “partition engine” unifies the different virtualization technologies. This partition engine provides a virtualized server resource of type “partition”, which is then used in the same way as a physical system.

This presentation deals with the main aspects and features of openQRM and gives a detailed overview of its capabilities and flexibility in managing enterprise data centers. It covers the open-source license model and the community, building and installing the openQRM server, management of the available resources, provisioning and high availability, explains the pluggable architecture and gives examples of how various plug-ins enhance the functionality of the server and GUI.

About the speaker:

Matthias Rechenburg is the author and project manager of the openMosixview and kiscsiadmin projects. For some years he has been involved in all kinds of data-center-related open-source projects such as high-performance and high-availability clustering, consolidation, and network and enterprise storage management. Currently his main interest is in virtualization technologies, their features and capabilities and their integration through a unified virtualization layer. He lives in Bonn, Germany, and works as a freelance developer for Qlusters, actively maintaining some of the openQRM internals and plug-ins. Mostly he enjoys coding in his home lab, but he also likes traveling, meeting other Linux people and joining all kinds of Linux-related events.

Real-Time Linux
by Theodore Ts'o
Friday, 2006-09-08 14:00-14:45

"Linux and Real-Time?" Sounds like a contradiction in terms? So was "Linux on non-x86 architectures" and "Linux and SMP" at one point. Today, using the latest patches, it is possible to get 50 microsecond or better latencies on real-time tasks on a real-time Linux kernel while the system is loaded by multiple kernel compiles, ping floods, and massive disk and network transfers occurring in the background.

This presentation will cover various historical approaches to providing hard and soft real-time capabilities on Linux or Linux-like systems, the features available in the mainline kernel, and patches such as Ingo Molnar's CONFIG_PREEMPT_RT. It will discuss the techniques being used to achieve these results, and what the future may bring. Finally, it will explore some of the applications that can take advantage of a real-time Linux system.

About the speaker:

Theodore Ts'o has been a C/Unix developer since 1987, and has been a Linux kernel developer since September 1991. He led the development of Kerberos V5 at MIT for seven years, and is the primary author and maintainer of the ext2/ext3 filesystem utilities. Theodore chairs the board of the Free Standards Group and is treasurer of Usenix. He currently is a Senior Technical Staff Member with the IBM Linux Technology Center.

SELinux and AppArmor
by Ralf Spenneberg
Friday, 2006-09-08 14:00-14:45

Linux is not very secure by design. Only simple access controls protect the system from a possible intruder. Several large distributors try to counter these dangers using mandatory access control (MAC). Red Hat/Fedora and Debian chose SELinux, while SUSE preferred AppArmor. I will compare the two approaches and point out the pros and cons of both.

About the speaker:

The author has used Linux since 1992 and has worked as a system administrator since 1994. During this time he has worked on numerous Windows, Linux and UNIX systems. For the last 5 years he has been working as a freelancer in the Linux/UNIX field, mostly providing Linux/UNIX training. His specialty is network administration and security (firewalling, VPNs, intrusion detection). He has developed several training classes used by Red Hat and other IT training companies in Germany, and has spoken at several SANS conferences and even more UNIX/Linux-specific conferences.

The openSUSE Build Service: building software for your Linux system
by Michael Schröder
Friday, 2006-09-08 14:45-15:30

One of the biggest advantages of open source is the sheer amount of available software and the fast update cycles. This advantage comes at a cost: software authors have to make sure that their software runs on current systems. Users demand binary versions of the software when compiling takes a long time or is difficult to do, but keeping all binaries up to date for the existing systems is very time-consuming, and sometimes even impossible when the software author has no access to a desired architecture or distribution.

The openSUSE build service is designed to provide software authors with a means to easily build packages for multiple Linux distributions and have them rebuilt automatically when a distribution changes. Unlike other existing systems, it is not centered around a single distribution type but hosts multiple build targets such as SUSE, Debian, Fedora and Mandriva. Users can create their own projects and use a flexible stacking mechanism to base their projects on packages from other projects. An automatic patch mechanism makes it easy to build different flavors of packages with different features or compiler options.

The build service features a public programming interface through a REST-based web service using XML over HTTP. The interface is designed for flexibility and easy integration with other tools and frameworks, to make creating and deploying packages simple and fun for Linux software authors, packagers and users.
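
Because the interface is plain XML over HTTP, it can be driven from almost any language or tool. The libcurl sketch below is only an illustration of that idea, with a placeholder URL and credentials rather than the build service's real API paths (see the build service documentation for those):

  /* obs-get.c -- fetch one XML resource from a REST interface.
   * Build: gcc -O2 obs-get.c -o obs-get -lcurl                     */
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
      CURL *curl;
      CURLcode res;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      curl = curl_easy_init();
      if (!curl)
          return 1;

      /* placeholder endpoint: list the packages of one project as XML */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "https://api.example.org/source/home:myproject");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");

      res = curl_easy_perform(curl);    /* the XML answer goes to stdout */
      if (res != CURLE_OK)
          fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return res == CURLE_OK ? 0 : 1;
  }

The same kind of request can of course be issued with command line tools or wrapped by higher-level clients; the REST design is what keeps such integration simple.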

The software is built on a farm of build clients using the Xen hypervisor for sandboxing. Multiple package formats such as deb and rpm are supported. Other features, such as the creation of Xen images and live/installation images, are planned.

About the speaker:

Michael Schröder received his PhD from the University of Erlangen for his work on distributed virtual machines. He has worked at SuSE since December 2000.


Comments or Questions? Mail to contact@linux-kongress.org Last change: 2006-09-01