Assuming you already have Apache up and running, it’s really easy to enable mod_log_forensic in order to look in detail at the HTTP(S) requests being made to your website.
This module provides detailed logs of every HTTP request – which can be very useful to developers when debugging benign requests that aren’t processed correctly (and result in errors), and also for looking in more detail at malicious requests intended to compromise the security of your website.
All you need to do is add the following to /etc/apache2/apache2.conf:
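A minimal example of what that configuration might look like (the paths shown are the Debian/Ubuntu defaults – and on those systems running `a2enmod log_forensic` achieves the LoadModule part for you):

```apache
# Load the forensic logging module (a2enmod log_forensic does this on Debian/Ubuntu)
LoadModule log_forensic_module /usr/lib/apache2/modules/mod_log_forensic.so

# Log full request details, one entry per request, to this file
ForensicLog /var/log/apache2/forensic_log.log
```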
Then, as root (or using ‘sudo’ if appropriate for your setup), run:
service apache2 restart
After enabling this module, detailed information about the HTTP requests made against your website is logged to /var/log/apache2/forensic_log.log
Some log files produced via syslog contain ^M (a literal carriage return) instead of a newline where a line ending would normally appear.
Systems which use this pattern include the good old Cisco VCS – still my favourite SIP/H.323 call control platform – and also log files from exciting new video conferencing disruptor Pexip – easily my favourite MCU and, imho, far and away the “best of breed” of next-generation video bridges – although I don’t pretend to be in any way unbiased here!
Whilst the use of ^M is actually very useful (it allows you to easily use powerful command line tools like “grep” to find and filter relevant/interesting logs when troubleshooting), it can mean that reading the logs you’ve found/filtered that way is a little difficult – particularly if the log entry spans several lines, as any SIP message would.
There are a number of ways of cleaning these logs up to make them easier for us poor humans to read:
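One simple approach is a little python filter – this is a sketch (the file-argument plumbing is my own assumption about how you’d invoke it; the classic `tr '\r' '\n' < in.log > out.log` one-liner does the same job from the shell):

```python
import sys


def expand_carriage_returns(text):
    """Turn bare carriage returns (shown as ^M in many pagers) into real newlines.

    Handle \r\n first so Windows-style endings don't become double blank lines.
    """
    return text.replace("\r\n", "\n").replace("\r", "\n")


if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python expand_cr.py filtered.log
    with open(sys.argv[1]) as f:
        sys.stdout.write(expand_carriage_returns(f.read()))
```

This reads the whole file into memory, which is fine for grep-filtered extracts; for multi-gigabyte raw logs you’d want to stream instead.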
I appear to have something of a small ARM server habit – with four of them running at home right now.
I like the fact that running four of these servers consumes less power than even a single low-end Intel server would (although I often find myself wishing I had the extra grunt that an Intel box could bring).
The SheevaPlug was my first foray into the world of ARM – and I loved it so much that when I stumbled across some Seagate FreeAgent DockStar servers (which are based on the same board as the SheevaPlug) in the bargain bin at Clas Ohlson, I bought them – and when one of them blew up a year later, I bought a Pogoplug (also, at that time, based on the same hardware).
Being based on more or less the same Marvell hardware brought other benefits: it was possible to copy binaries from one system to another and they’d run happily.
My latest acquisition is a £25 Raspberry Pi unit. It’s running the new Debian wheezy raspbian Linux distro. Alas they switched architecture from armel to armhf between the (original?) Debian squeeze and (“new” in mid 2012) Debian wheezy (raspbian) distributions for the Raspberry Pi. Whilst the most basic “hello world” application (surprisingly!) can be copied from a Sheevaplug-type system to a Raspberry Pi, it’s less straightforward for nontrivial applications.
For anything that links against shared libraries, you instead get errors like:

“./myprogram: error while loading shared libraries: unexpected PLT reloc type 0x12”
http://wiki.debian.org/ArmHardFloatPort discusses the differences between armel and armhf – crucially the ABI is different – hence the incompatibilities.
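As an aside, you can often tell which float ABI an ARM binary was built for by inspecting the e_flags field of its ELF header – here’s a quick python sketch (note: not every toolchain sets these flag bits, hence the ‘unknown’ fallback):

```python
import struct

# Flag bits defined by the ARM ELF ABI supplement; set by reasonably
# modern binutils, but not guaranteed to be present in older binaries.
EF_ARM_ABI_FLOAT_SOFT = 0x00000200
EF_ARM_ABI_FLOAT_HARD = 0x00000400


def arm_float_abi(path):
    """Return 'hard', 'soft' or 'unknown' for a 32-bit little-endian ARM ELF."""
    with open(path, "rb") as f:
        header = f.read(40)  # e_flags sits at offset 36 in a 32-bit ELF header
    if header[:4] != b"\x7fELF":
        raise ValueError("%s is not an ELF file" % path)
    (e_flags,) = struct.unpack_from("<I", header, 36)
    if e_flags & EF_ARM_ABI_FLOAT_HARD:
        return "hard"   # armhf-style calling convention
    if e_flags & EF_ARM_ABI_FLOAT_SOFT:
        return "soft"   # armel-style calling convention
    return "unknown"
```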
As an interim workaround, running the older Debian Squeeze distro might make life easier (and permit binary sharing in the short term). It also sounds like it may be possible to install the armel versions of the same libraries in parallel with the armhf versions on the same system (and presumably carefully manipulate LD_LIBRARY_PATH when running armel binaries) – but that sounds like a bit of a maintenance headache. Longer term I want to be running Raspbian and get the performance benefits of moving to armhf anyway.
The pragmatic answer (for my use case, anyway) is, therefore, to recompile.
The gmail blog describes a wonderful feature of gmail which allows you to use gmail to send e-mail from an address at your own domain (rather than from your @gmail.com address).
As somebody with their own domain, who has loads of carefully separated e-mail identities – all of which funnel incoming e-mail to a single gmail account – I sometimes find it necessary to reply from one of these addresses.
In this article we’ll discuss how it’s possible to arrange for this to happen.
In this article we’ll extract information from an MS Excel spreadsheet using python’s xlrd library.
Henrik Kniberg has an excellent Microsoft Excel spreadsheet for tracking Agile/SCRUM product backlogs.
On Windows platforms, there’s even a button to generate printable story index cards from the backlog items.
However, as a Mac (MS Office for Mac) and Linux (OpenOffice/LibreOffice) user, I found that the macros simply didn’t work for me, even after a bit of hacking. OpenOffice’s VBA implementation has some interesting incompatibilities with the Microsoft equivalent – and Microsoft, in their infinite wisdom, have removed VBA support from MS Excel for Mac – pushing me firmly towards an OSS solution.
I also faced an additional challenge: as well as index cards, I needed to generate a readable PDF version of the document (using data read from the spreadsheet) and a mediawiki page (again using data read from the spreadsheet).
I want a solution that will work both on Linux (Ubuntu) and Mac OS X (though the solution I came up with probably ought to work on Windows too).
              /---> index cards
spreadsheet  ----> mediawiki page
              \---> PDF document
To achieve the desired functionality and portability, I elected to use my favourite scripting language, python and the excellent xlrd module for reading Microsoft Excel spreadsheets.
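To give a flavour of the xlrd side of things, here’s a sketch of the extraction step – note that the assumption that row 0 holds column headings is mine (the real backlog spreadsheet’s layout will differ), and the `backlog.xls` filename is just a placeholder:

```python
import sys


def sheet_rows(sheet, header_row=0):
    """Yield each data row of an xlrd sheet as a dict keyed by the header row."""
    headers = [str(sheet.cell_value(header_row, col)) for col in range(sheet.ncols)]
    for row in range(header_row + 1, sheet.nrows):
        values = [sheet.cell_value(row, col) for col in range(sheet.ncols)]
        yield dict(zip(headers, values))


if __name__ == "__main__" and len(sys.argv) > 1:
    import xlrd  # pip install xlrd

    # Usage: python read_backlog.py backlog.xls
    book = xlrd.open_workbook(sys.argv[1])
    for item in sheet_rows(book.sheet_by_index(0)):
        print(item)
```

Once each backlog item is a plain dict, the three output formats (index cards, mediawiki, PDF) are just different renderings of the same list.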
Many workplaces have mediawiki wikis – and these are often the information hub for a team or, indeed, the entire organisation.
Rather than generating a new web page on a different web server to display a semi-regularly updated, computer-generated report, I thought it could be convenient to have a tool which permitted the easy replacement of a particular wiki page.
I recently undertook to develop such a tool – and, in researching, discovered the excellent mwclient library – although it _is_ somewhat lacking in examples/documentation.
As luck would have it, the reporting tool I was using already generated output in mediawiki markup on stdout – so all I needed to do was write a tool that could read page contents from stdin and upload them to a predefined page on the wiki.
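The core of such a tool ends up being tiny. Here’s a sketch using mwclient – the wiki hostname, script path, credentials and page title below are all made-up placeholders, and you should check `Page.save` against your mwclient version’s documentation:

```python
import sys


def replace_page(site, title, new_text, summary="Automated report update"):
    """Overwrite the named wiki page with new_text; returns the page object."""
    page = site.pages[title]
    page.save(new_text, summary=summary)
    return page


if __name__ == "__main__" and len(sys.argv) > 1:
    import mwclient  # pip install mwclient

    # Usage: report_tool | python wiki_upload.py "Team/Status_report"
    site = mwclient.Site("wiki.example.org", path="/w/")   # placeholder wiki
    site.login("botuser", "botpassword")                   # placeholder credentials
    replace_page(site, sys.argv[1], sys.stdin.read())
```

Wiring it up as a pipe keeps the reporting tool and the upload tool nicely decoupled.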