Submitted by Joseph Conway on Mon, 12/07/2010 - 03:58
When you pass large amounts of data to and from PL/R, quite a lot of time is spent on conversion; it is better to store the data directly as R objects. I had been planning to continue with timeseries aggregation, but decided to take a side road based on a recent question on the PL/R mailing list. The question was related to seismic data, which is in fact timeseries data. However, I gather such data is normally stored as an array of floats, all recorded during some seismic event at a constant sampling rate.
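A minimal sketch of such a schema (the table and column names are hypothetical, not from the original post):

```sql
-- One row per seismic event; the readings are kept as a float8
-- array instead of one row per sample (all names are examples).
CREATE TABLE seismic_event (
    event_id    serial PRIMARY KEY,
    recorded_at timestamptz NOT NULL,
    sample_rate integer NOT NULL,   -- samples per second
    samples     float8[] NOT NULL   -- readings at the constant rate
);
```

Stored this way, PL/R can hand the whole array to R in one call rather than converting row by row.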
Submitted by Joseph Conway on Fri, 09/07/2010 - 01:21
Frequently when dealing with parametric data, you need to "roll up" the data in summary fashion as it ages, either to reduce the volume kept on hand or because the summary statistics are what really interests you. There are several ways to do that, and this post highlights four different approaches. I was reminded of this kind of "roll up" today by a question on the pgsql-novice list. This is actually quite a large topic, so this tip will likely just scratch the surface. The question was related to storing min, max, and avg summaries on an hourly, daily, and weekly basis. The basic idea is that you keep, for example, raw data for a week, hourly summaries for 6 months, daily summaries for 3 years, and weekly summaries forever.
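As a sketch of the hourly step (the table and column names are assumptions, not from the original post):

```sql
-- Assumed tables: raw_data(ts timestamptz, val float8) and
-- hourly_summary(hour timestamptz, min_val, max_val, avg_val float8).
INSERT INTO hourly_summary (hour, min_val, max_val, avg_val)
SELECT date_trunc('hour', ts), min(val), max(val), avg(val)
FROM raw_data
WHERE ts < date_trunc('hour', now())
GROUP BY 1;

-- Raw rows older than the retention window can then be dropped:
DELETE FROM raw_data WHERE ts < now() - interval '7 days';
```

In practice the SELECT would also be restricted to hours not yet summarized, so the roll-up can be rerun safely.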
Submitted by Joseph Conway on Wed, 07/07/2010 - 22:34
Someone posted a dilemma to the pgsql-sql list today that involved many if not all of his sequences getting out of sync with their respective "serial" columns. In other words, something like "SELECT max(id) FROM sometable" yields 42, but the sequence nextval for sometable.id is currently set to 36. This is obviously bad (for reasons left as an exercise for the reader). So besides trying to figure out how the database ended up in this state, he needed a script to reset all of his sequences to the correct next value. I had run into a similar need not too long ago. Namely, when setting up multi-master replication with Bucardo you need your sequences to draw different values on either master so as not to conflict.
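For a single table, assuming the default serial naming convention, the reset looks like this; the next nextval() then returns max(id) + 1:

```sql
-- Bring the sequence in line with the column data.
SELECT setval('sometable_id_seq', (SELECT max(id) FROM sometable));

-- pg_get_serial_sequence() avoids hard-coding the sequence name:
SELECT setval(pg_get_serial_sequence('sometable', 'id'),
              (SELECT max(id) FROM sometable));
```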
Submitted by Joseph Conway on Wed, 07/07/2010 - 02:10
I was given a Postgres database dump to analyze today, created with "pg_dump -Fc". The source database included PostGIS 1.3.x extensions. I'm not sure if this is standard with PostGIS, but the related database objects were all dumped with a hard-coded library path, specifically /usr/lib/postgresql/8.3/lib. On my machine I have many PostgreSQL clusters (essentially at least one for every supported branch dating back to 7.3.x), but they are not located under /usr/lib/postgresql. As such, I needed a quick fix. To wit:
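One quick fix along those lines (a sketch, not necessarily the post's exact method; the replacement path is an example, and the pg_restore step is only shown as a comment) is to convert the custom-format dump to plain SQL and rewrite the hard-coded path with sed:

```shell
# Convert the custom-format dump to plain SQL first, e.g.:
#   pg_restore -f dump.sql mydump.Fc
# Here we simulate one affected line from such a dump:
echo "AS '/usr/lib/postgresql/8.3/lib/liblwgeom', 'LWGEOM_in'" > dump.sql

# Rewrite the hard-coded library path; the target path is an example
# and depends on where your server's libraries actually live.
sed -i "s|/usr/lib/postgresql/8.3/lib|/usr/local/pgsql/lib|g" dump.sql

cat dump.sql   # the line now references /usr/local/pgsql/lib
```

The rewritten dump.sql can then be loaded with psql -f.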
Submitted by Martin Zobel-Helas on Mon, 07/06/2010 - 10:25
The text editor vim offers several tools for automation. This howto describes a way to auto-insert text modules when creating new files. During programming or administration you often need the same text modules again and again. The editor vim is very helpful here, as it can detect the type of a file as it is being created and insert predefined text modules accordingly. This behaviour can be configured in the file .vim/plugin/autoinsert.vim, for example with:
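A minimal sketch of such an autoinsert.vim (the template paths are assumptions and must exist on your system):

```vim
" Read a skeleton file into each newly created buffer,
" keyed on the file name pattern.
autocmd BufNewFile *.sh  0r ~/.vim/templates/skeleton.sh
autocmd BufNewFile *.pl  0r ~/.vim/templates/skeleton.pl
autocmd BufNewFile *.tex 0r ~/.vim/templates/skeleton.tex
```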
Submitted by Bernd Helmle on Tue, 25/05/2010 - 11:29
The PostgreSQL developers' community recently published the first beta version of the new 9.0 release. Over 200 new features and improvements are included in this version. Among other things, PostgreSQL now offers a built-in replication solution as well as the ability to run read-only queries on standby nodes that are continuously updated by log shipping (Hot Standby). Streaming Replication sends transaction log records directly to one or more standby nodes, which considerably reduces the delay compared with the more common, file-based log shipping.
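A sketch of the relevant 9.0 settings (host names and values are examples):

```
# postgresql.conf on the primary:
wal_level = hot_standby
max_wal_senders = 3

# recovery.conf on the standby:
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replication'

# postgresql.conf on the standby, to allow read-only queries:
hot_standby = on
```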
Submitted by Roland Wolters on Thu, 20/05/2010 - 11:40
Following our earlier introduction to RHCS we now present a real world example: the installation of RHCS with Debian to provide certain virtual machines as services. Our RHCS overview already explained the basics of RHCS. This time we will take two hosts with shared storage and provide KVM guests as services.
Installation of the nodes
In this setup the nodes are the machines which are running KVM. Each running KVM guest is a service managed by RHCS. While installing the KVM hosts you should make sure you comply with the following suggestions:
Submitted by Michael Banck on Wed, 05/05/2010 - 15:00
In May, consultants from credativ GmbH will hold a three-day advanced system and network administration workshop at the Open Source School in Munich.
Training specifics (subject to modifications!):
Kerberos: This training covers the Kerberos authentication protocol, which can handle a range of services and operating systems transparently. The use of tickets makes single sign-on possible, so a user can access all services with a single login. The training is aimed at network and system administrators who wish to roll out Kerberos in their business or administrative network; it also covers the installation and management of Kerberos, as well as the integration of services and client programs.
Submitted by Bernd Zeimetz on Mon, 12/04/2010 - 10:09
Lighttpd is a web server with a fast-growing user base. This howto demonstrates how redirects can be done based on the language of the user's browser. While migrating from our old blogging software to Movable Type, we decided it would be a good idea to show the blog's welcome message in English or German depending on the language setting of the user's browser. Since one of the reasons for switching to the new blog engine was that Movable Type creates static HTML pages, we wanted to avoid CGI scripts or similar workarounds.
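A sketch of how this can look in lighttpd.conf (mod_redirect is required; the target paths are examples, not the blog's actual layout). lighttpd's $HTTP["language"] conditional matches against the browser's Accept-Language header:

```
server.modules += ( "mod_redirect" )

$HTTP["url"] =~ "^/$" {
    $HTTP["language"] =~ "de" {
        url.redirect = ( "^/$" => "/de/welcome.html" )
    }
    else $HTTP["language"] !~ "de" {
        url.redirect = ( "^/$" => "/en/welcome.html" )
    }
}
```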
Submitted by Bernd Helmle on Fri, 26/03/2010 - 13:57
The OOM-Killer can cause nasty surprises on machines under heavy memory load: processes are killed without warning. Fortunately, this behaviour can be adjusted with some clever kernel tweaks. Administrators of Linux machines with very high RAM usage are sometimes faced with a terrifying scenario: the Linux OOM-Killer (OOM = Out Of Memory). When, for example, a PostgreSQL instance has crashed this way, the following entry can typically be found in the server log:
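One common tweak (a sketch; the values are a typical starting point for dedicated database servers, not a universal recommendation) is to disable memory overcommit, so that allocations fail with an error instead of triggering the OOM-Killer:

```
# /etc/sysctl.conf
# Strict accounting: never overcommit memory.
vm.overcommit_memory = 2
# Percentage of physical RAM counted towards the commit limit.
vm.overcommit_ratio = 80
```

On kernels of that era, a single process could also be exempted from the OOM-Killer by writing -17 to /proc/<pid>/oom_adj.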