4.2. Implementation

To meet the above requirements, the design is broken down into five major areas. Each of these areas will eventually be defined in detail on subsequent pages. Brief descriptions are provided below.

4.2.1. Application Interface Stubs

This portion deals with the code that provides the entry points into libhpi.so. Initially it will contain only stubs that provide API compliance without any functionality. Each stub API will print a "TODO: Implement _api_" message to stderr. As each API is implemented, its TODO message will be removed.
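
As a concrete illustration, a stub entry point might look like the sketch below. It assumes the SaHpi.h header and the saHpiSessionOpen() prototype from the HPI specification; the choice of return code for unimplemented calls is an assumption of this sketch.

    #include <stdio.h>
    #include <SaHpi.h>

    /* Stub entry point: API-compliant signature, no functionality yet.
     * Prints the TODO message to stderr as described above; returning
     * SA_ERR_HPI_UNSUPPORTED_API is an assumption of this sketch. */
    SaErrorT SAHPI_API saHpiSessionOpen(
            SAHPI_IN  SaHpiDomainIdT  DomainId,
            SAHPI_OUT SaHpiSessionIdT *SessionId,
            SAHPI_IN  void            *SecurityParams)
    {
            fprintf(stderr, "TODO: Implement saHpiSessionOpen\n");
            return SA_ERR_HPI_UNSUPPORTED_API;
    }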

4.2.2. Infrastructure

The core library is considered infrastructure. This is the section of code which provides an internal representation of resources and events, and exposes them as HPI data structures.
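
A hypothetical sketch of such an internal representation is shown below. The structure name and fields are illustrative rather than the actual OpenHPI definitions; SaHpiRptEntryT is the HPI-defined RPT entry type, and GSList comes from glib (see 4.2.3).

    #include <glib.h>
    #include <SaHpi.h>

    /* Illustrative internal resource record: the infrastructure keeps
     * its own bookkeeping alongside the HPI-visible data structures. */
    struct oh_resource {
            SaHpiRptEntryT entry;  /* RPT entry exposed through the HPI */
            GSList *rdr_list;      /* resource data records (sensors, controls, ...) */
            void *plugin_handle;   /* handle of the plugin that owns this resource */
    };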

4.2.3. Utility Functions

The utility functions needed by OpenHPI, including linked-list and hash-table implementations, are provided by the glib utility library. This is a well-tested library used by many open source projects (including gtk, gnome, and linux-ha). It is thread safe, so proper use of the library for OpenHPI data structures will ensure that OpenHPI is thread safe as well.
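
For example, a glib hash table could index internal resource records by resource ID. The helper names below are hypothetical; the glib calls themselves (g_hash_table_new, g_hash_table_insert, g_hash_table_lookup) are the library's standard API.

    #include <glib.h>

    struct oh_resource;          /* internal record, as sketched in 4.2.2 */

    static GHashTable *resource_table;

    /* g_direct_hash/g_direct_equal operate on the pointer value itself,
     * which suits integer IDs stored via GUINT_TO_POINTER. */
    static void init_resource_table(void)
    {
            resource_table = g_hash_table_new(g_direct_hash, g_direct_equal);
    }

    static void add_resource(guint id, struct oh_resource *res)
    {
            g_hash_table_insert(resource_table, GUINT_TO_POINTER(id), res);
    }

    static struct oh_resource *lookup_resource(guint id)
    {
            return g_hash_table_lookup(resource_table, GUINT_TO_POINTER(id));
    }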

4.2.4. OpenHPI Plugin Interface

The OpenHPI plugin interface is the method of communication between the OpenHPI infrastructure and real hardware. It is designed to be abstract enough to allow communication with any type of hardware over any sane interface (be that a device driver or a network protocol).
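
One common way to express such an interface in C is a table of function pointers that each plugin fills in. The sketch below is hypothetical and does not claim to match the actual OpenHPI ABI; only SaHpiEventT is taken from the HPI headers.

    #include <SaHpi.h>

    /* Hypothetical plugin interface: each plugin supplies these
     * callbacks, and the infrastructure drives hardware through them. */
    struct oh_abi {
            /* open a connection to the hardware; returns a private handle */
            void *(*open)(const char *config);
            void  (*close)(void *hnd);
            /* discover resources and report them to the infrastructure */
            int   (*discover_resources)(void *hnd);
            /* poll the hardware for a pending event, if any */
            int   (*get_event)(void *hnd, SaHpiEventT *event);
    };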

4.2.5. OpenHPI Plugins

There exist a number of plugins in the OpenHPI source tree to enable different types of hardware and other interfaces. The current ones are listed below:

  • Dummy - a static plugin which provides a fixed set of resources to test the infrastructure (a hypothetical sketch follows this list).

  • IPMI - an IPMI interface module based on the libOpenIPMI library developed by Corey Minyard.

  • Watchdog - an interface to the Linux watchdog device. Softdog can be used in place of a real hardware watchdog.

  • Text_remote - a remoting plugin which talks to the openhpi daemon on a remote machine. This allows multiple instances of HPI to be seen together in a single domain.
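
Continuing the hypothetical struct oh_abi sketch from 4.2.4, a Dummy-style plugin might fill in the callback table with trivial implementations along these lines; all names and behavior here are illustrative only.

    #include <SaHpi.h>

    /* struct oh_abi repeated from the hypothetical sketch in 4.2.4 */
    struct oh_abi {
            void *(*open)(const char *config);
            void  (*close)(void *hnd);
            int   (*discover_resources)(void *hnd);
            int   (*get_event)(void *hnd, SaHpiEventT *event);
    };

    static void *dummy_open(const char *config)
    {
            (void)config;
            return (void *)1;   /* any non-NULL token serves as a handle */
    }

    static void dummy_close(void *hnd)
    {
            (void)hnd;
    }

    static int dummy_discover_resources(void *hnd)
    {
            (void)hnd;
            /* a real plugin would report its static resources here */
            return 0;
    }

    static int dummy_get_event(void *hnd, SaHpiEventT *event)
    {
            (void)hnd;
            (void)event;
            return 0;           /* no events pending */
    }

    struct oh_abi dummy_abi = {
            .open               = dummy_open,
            .close              = dummy_close,
            .discover_resources = dummy_discover_resources,
            .get_event          = dummy_get_event,
    };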