Monday, 21 September 2009
openssl dgst: unable to load key file
For one of our applications we sign some files using an SSL private key:
$ openssl dgst -sha1 -sign signing-key.pem -out filename.sha1 filename
$ openssl dgst -sha1 -verify signing-cert.pem -signature filename.sha1 filename
unable to load key file
The problem is that you need to use the public key to do the verification, not the certificate. Thankfully it is easy enough to extract the public key from the certificate:
$ openssl x509 -in signing-cert.pem -pubkey -noout > signing-pub.pem
Then verification using the public key works as expected:
$ openssl dgst -sha1 -verify signing-pub.pem -signature filename.sha1 filename
Verified OK
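As a sanity check (assuming the signing key is RSA), the same public key can also be derived directly from the private key and compared with the one extracted from the certificate:
$ openssl rsa -in signing-key.pem -pubout -out signing-pub-from-key.pem
$ diff signing-pub.pem signing-pub-from-key.pem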
Sunday, 7 June 2009
Ideas for a Tuxedo adapter
For enterprises that have legacy BEA Tuxedo systems, one of the main benefits you get from using BEA Weblogic (or now Oracle Weblogic?) as a J2EE product is the Weblogic Tuxedo Connector (WTC). WTC allows the J2EE container to connect to a Tuxedo domain in the same way that a normal Tuxedo domain does: you can create domain gateways, import and export services, apply ACLs, set domain passwords, etc. This functionality is not available from any of the other J2EE vendors (e.g. JBoss) as far as I can see, which puts them at a significant disadvantage when bidding for business with these enterprise customers. (I am talking from personal experience here.)
This state of affairs surprises me, though, because it should be relatively straightforward to make a simple bi-directional HTTP-based gateway. (The company I work for created a really simple unidirectional gateway for each direction, which is even easier but rather clunky IMO.)
For calls from external applications to Tuxedo I'd have an HTTP server that is a Tuxedo client, and for calls going from Tuxedo to an external application a Tuxedo server that is an HTTP client. Then I'd create a simple XML representation of an FML32 buffer (which I did in a past life and I'm sure I could pretty easily reproduce). Ideally this HTTP and Tuxedo functionality would be built into a single Tuxedo server binary that could just be deployed in a Tuxedo domain, but separate components could work okay as long as they were really easy to set up.
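To make that concrete, a call into the inbound gateway might look something like this (the URL, service name, field names and XML format are all made up for illustration; the gateway itself is only a proposal):
$ curl -H 'Content-Type: text/xml' \
    --data '<service name="CREDIT_CHECK"><CUST_ID>12345</CUST_ID><AMOUNT>99.95</AMOUNT></service>' \
    http://tuxgw.example.com:8080/
The gateway would read the service name from the XML, convert the child elements into FML32 fields, call the service, and return the reply buffer as XML in the HTTP response.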
For the HTTP server I'd probably start with one of the webservers from acme.com - maybe a stripped down version of thttpd - or DJB's publicfile.
For HTTP clients there is libcurl or ACME's http_post or DJB's http@.
The Tuxedo client could either use the native API (if it is embedded in the Tuxedo server) or the workstation API (if it is in an external process), in which case it would connect to a workstation handler instead. The service called would be determined by looking at the incoming XML message and converting the rest of the message into an FML32 buffer.
The Tuxedo server should be configured to advertise any number of Tuxedo services and then create an XML message with the FML32 buffer contents and the service name. It should be possible to call a different URL based on which service is called.
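The per-service routing could be driven by a small mapping file, along these lines (purely illustrative; the format and the URLs are invented):
# advertised service -> target URL for the outbound gateway
CREDIT_CHECK    http://apps.example.com/credit-check
ADDRESS_LOOKUP  http://apps.example.com/address-lookup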
For security there could either be none if the server is listening on localhost, or SRP or a pre-shared secret for a secure connection. If the applications are internal and no third-party verification is necessary I'd avoid using SSL - the password files used by SRP are much easier to manage than regularly expiring SSL certificates.
Tuesday, 24 February 2009
Non-SSL cryptographic tunnels
Cryptographic tunnels are used to secure traffic between a client and server that communicate using a protocol that would normally be vulnerable to eavesdropping or man-in-the-middle attacks, where an attacker impersonates the other party. Instead of the client and server communicating directly they communicate via a local proxy and the traffic between the proxies is secured. Normally the client and server do not need to be changed, other than to configure where they connect to. Probably the best known software for securing TCP-based protocols is Stunnel.
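As a concrete illustration of the proxy pair, a minimal Stunnel setup might look something like this (the service name, ports, hostname and certificate path are placeholders):
; stunnel.conf on the client host: accept plaintext locally, forward over TLS
[myproto]
client = yes
accept = 127.0.0.1:5000
connect = server.example.com:5001
; stunnel.conf on the server host: accept TLS, pass plaintext to the real server
cert = /etc/stunnel/stunnel.pem
[myproto]
accept = 5001
connect = 127.0.0.1:5000
The client application is then pointed at 127.0.0.1:5000 instead of the real server.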
Stunnel establishes a secure TCP tunnel using the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols. SSL/TLS has a handshake at the beginning of the connection where the client and the server authenticate each other using their public/private keypairs and agree on the symmetric cipher and session key to use for encrypting subsequent packets. This handshake is relatively resource- and time-consuming, so SSL/TLS connections work best if the connection is left open for a relatively long time and the handshake cost is not incurred very often. SSL/TLS does not work very well if the TCP-based protocol connects and disconnects for every transaction, for example: the overhead of re-establishing a TCP socket for each transaction is much lower than that of re-establishing an SSL/TLS connection.
Another thing to consider is that when the tunnel endpoints are fixed or under the same entity's control, the overhead of the SSL/TLS handshake is unnecessary and just slows down the initial transactions sent over the tunnel. One of the main benefits of SSL/TLS is that the server, and optionally the client, are authenticated, but if the tunnel endpoints are known and trusted then authentication is not necessarily required. A symmetric cipher and session key can be agreed on beforehand, eliminating the need for this to be negotiated on each connection. Man-in-the-middle attacks are still prevented, because an attacker without the pre-shared key cannot decrypt the traffic or inject meaningful data (provided the tunnel also protects message integrity). Denial-of-service is still possible, as it is with SSL/TLS, but this may be mitigated by the TCP-based protocol itself.
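A pre-shared key for such a tunnel can be generated once and copied to both endpoints, for example (assuming a 256-bit key is wanted; the filename is arbitrary):
$ openssl rand -hex 32 > tunnel.key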
Cryptographic tunnels that rely solely on symmetric encryption with a pre-agreed cipher and key are thus useful in some scenarios. One of the simplest and best implementations I have found is Jess Mahan's ctunnel which is just a single C file linked to the OpenSSL crypto library.
I'd eventually like to write (or more likely cobble together using other people's code :-) a UCSPI-compliant secure TCP proxy that works similarly to ucspi-proxy but can be used to provide secure tunnels based on various technologies - perhaps SSL/TLS, SRP SASL, public-key algorithms like Curve25519, message authentication codes like Poly1305-AES, symmetric ciphers like AES-256, and combinations of these. As well as hopefully being useful, this would allow for experimentation.
Sunday, 22 February 2009
Watching /etc
I need to be able to watch the /etc directory of a host over time so that I know when any changes have been made. The main aims are to ensure that changes are not made by the sysadmins without our knowledge and to ensure that changes we do request are actually implemented. I also need to compare the /etc directories on multiple hosts to ensure that they are as close to identical as possible.
Since I am a non-root user I can't use something like Joey Hess' etckeeper (announcement), although I can re-use the idea of storing the files from /etc in a Git repository, using metastore called from hook scripts to store the file metadata which isn't normally stored by Git itself.
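A minimal pair of hooks for that might look something like this (untested, and assuming metastore's --save and --apply options and its default .metadata file):
$ cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# record ownership, permissions and mtimes before each commit
metastore --save
git add .metadata
EOF
$ cat > .git/hooks/post-checkout <<'EOF'
#!/bin/sh
# re-apply the recorded metadata after a checkout
metastore --apply
EOF
$ chmod +x .git/hooks/pre-commit .git/hooks/post-checkout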
The current plan is to do something like (untested):
$ cd /path/to
$ export HOSTNAME=$(uname -n)
$ git clone git://repos/config
$ cd config
$ git branch $HOSTNAME
$ git checkout $HOSTNAME
$ rsync -a /etc .
$ git add etc
$ git commit -m"date '+%F %T'"
Then I should be able to run commands like:
$ git diff thishost thathost
to see the differences between the hostname branches.
I can also look at the HEAD commit on each branch on a daily basis to see what has changed.
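The daily update for each host could then be something like (again untested; --delete ensures files removed from /etc show up as deletions, and git add -A stages them):
$ cd /path/to/config
$ git checkout $HOSTNAME
$ rsync -a --delete /etc .
$ git add -A etc
$ git commit -m "$(date '+%F %T')"
$ git log -1 -p
with the last command showing exactly what changed in the latest snapshot.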
We will see if this works when I implement it :-)
git fine-grained access control
If we are going to switch from CVS to Git we are going to need to implement the fine-grained access control features we have now. For example:
- only certain users are allowed to commit to each branch.
- only certain paths can be committed to on each branch.
- only certain users are allowed to create tags in the central repository. (I'd like to get rid of this limitation, but it's there now. Perhaps only limit tag names that match a particular set of regular expressions.)
Junio Hamano and Carl Baldwin have an update-hook-example that describes how to implement an access control hook script, so we will base things on that. Their example assumes that the user has logged in using ssh, so they can use username=$(id -u -n); since we are coming in via the web we'd have to use the REMOTE_USER environment variable instead.
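A rough sketch of such a hook, assuming a hypothetical check-access helper that reads the access rules from the config file described below:
#!/bin/sh
# .git/hooks/update - Git passes the ref name and the old and new revisions
refname="$1"
oldrev="$2"
newrev="$3"
# REMOTE_USER is set when coming in via the web; fall back to the ssh login
user="${REMOTE_USER:-$(id -u -n)}"
if ! check-access "$user" "$refname"; then
    echo "Access denied: $user may not update $refname" >&2
    exit 1
fi
exit 0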
I think it makes sense to use a configuration file that is similar to the Gitosis config file, which people are familiar with. This is just an INI file that can be parsed using Config::IniFiles or something similar to Gitosis::Config, so this shouldn't be difficult.
Something else worth looking at is Gerrit, a web-based code review system for projects using Git.