Index of categories

Pages about Debian.

The smell of email

This was written in response to a message with a list of demotivating behaviours in email interactions, like finger-pointing, aggressiveness, resistance when being called out for misbehaving, public humiliation for mistakes, and so on.

There are times when I stumble on one of the behaviours in that list, and I think "ok, today I feel like doing some paid work rather than working on Debian".

If another day I wake up deciding to enjoy working on Debian, which I greatly do, I try to make sure that I can focus on bits of Debian where I don't stumble on any of those behaviours.

Then I stumble on Gregor's GDAC and I feel like I'd happily lose one day of pay right now, and have fun with Debian.

I feel like Debian is this big open kitchen populated by a lot of people:

  • some dump shit
  • some poke the shit with a stick, contributing to the spread of the smell
  • some carefully clean up the shit, which in the short term still contributes to the smell, but makes things better in the long term
  • some prepare and cook, making a nice smell of food and NOMs
  • some try out the food and tell us how good it was

I have fun cooking and trying out the food. I have fun being around people who cook and try out the food.

The fun in the kitchen seems to be correlated to several things, one of which is that it seems to be inversely proportional to the stink.

I find this metaphor interesting, and I will start thinking about the smell of a mailing list post. I expect it will put posts into perspective, and that I will develop an instinct for it, so that I won't give a stinky post the same importance as a post that smells of food.

I also expect that the more I learn to tell the smell of food from the smell of shit, the more I can help clean it up, and the more I can help tell people who repeatedly contribute to the stink to please try cooking instead or, failing that, to just stay out of the kitchen.

Posted Fri Dec 5 11:51:49 2014 Tags:

Fun and Sanity in Debian

A friend of mine recently asked: "is there anything happening in Debian besides systemd?"

Of course there is. He asked it 2 days after the freeze, which happened in time, and with an amazingly low RC bug count.

The most visible thing right now seems to be this endless init system argument, but there are fun and sane things in Debian. Many of them.

I think someone should put the spotlight on them, and here's my attempt.

Yesterday I set up a gobby document asking "What is now happening in Debian that is exciting, fun and sane?", and passed the link around the Cambridge Miniconf and some IRC channels.

Here are a few quotations that I collected:

The armhf and arm64 ports have for me been wonderful and exciting, and were a great time for me to start getting involved. (Jon "Aardvark" Ward)

We have a way of tracking random contributors, and as far as I know no other project has anything like it. (Enrico Zini)

codesearch.debian.net is an incredibly important resource, not just for us but for the free software community at large. (Ben Hutchings)

sources.debian.net is a very useful resource with lots of interested contributors, it received 10 OPW applicants (Stefano Zacchiroli)

It has never been easier to work on new infrastructure project thanks to the awesome work of the DSA team. We have dozens of contribution opportunities outside of just plain packaging. (Raphaël Hertzog)

The work on reproducible builds has achieved excellent results with 61.3% of packages being reproducible. (Paul Wise)

Porting arm64 has been (perversely) great fun. It's remarkably moreish and I like nothing more than a tedious argument with autoconf macros. Working with lots of enthusiastic people from other teams, helping getting the port set up and build has been great - thank you everybody. (Wookey)

And here are random exciting things that were listed:

  • build-profile support (for bootstrapping) is all in jessie (dpkg, apt, sbuild, python-apt, debhelper, libconfig-model-dpkg-perl, lintian).
  • PointCloudLibrary (PCL) got migrated from Ubuntu to Debian
  • Long Term Support has arrived!
  • http://ci.debian.net
  • Debian is participating for the second time in OPW as mentor orga
  • ftp-master is getting an API
  • cross-toolchains for jessie are available
  • arm64/ppc64el ready to go into jessie
  • wheezy-backports is more useful and used than ever
  • we froze, in time, with a remarkably low RC bug count, and we have a concrete plan for getting from that to a release
Posted Sun Nov 9 16:10:48 2014 Tags:

cryptsetup password and parallel boot

Since parallel boot arrived, the cryptsetup password prompt on my system gets flooded with other boot messages during boot.

I fixed it, as suggested in #764555, by installing plymouth and then editing /etc/default/grub to add splash to GRUB_CMDLINE_LINUX_DEFAULT:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

Besides showing pretty pictures (and, most importantly, getting them out of my way when I press ESC), plymouth also provides a user prompt that works with parallel boot, which sounds like what I needed.
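
For reference, the whole fix boils down to a couple of steps (a sketch of what I did; adjust to taste):

# install plymouth, then tell grub to boot with a splash screen
apt-get install plymouth
# edit /etc/default/grub so that it reads:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
update-grub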

Posted Fri Oct 24 10:20:22 2014 Tags:

Alternate rescue boot entry with systemd

Since systemd version 215, adding systemd.debug-shell to the kernel command line activates the debug shell on tty9 alongside the normal boot. I like the idea of that, and I'd like to have it in my standard 'rescue' entry in my grub menu.

Unfortunately, by default update-grub does not allow customizing the rescue menu entry options. I have just filed #766530 hoping for that to change.

After testing the patch I proposed for /etc/grub.d/10_linux, I now have this in my /etc/default/grub, with some satisfaction:

GRUB_CMDLINE_LINUX_RECOVERY="systemd.log_target=kmsg systemd.log_level=debug systemd.debug-shell"
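
In the meantime, the same debug shell can also be brought up on an already running system without touching grub, which is handy for testing; this assumes the debug-shell unit that ships with systemd:

# root shell on tty9 of the running system; remember to stop it when done
systemctl start debug-shell.service
systemctl stop debug-shell.service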

Further information:

Thanks to sjoerd and uau on #debian-systemd for their help.

Posted Thu Oct 23 22:06:30 2014 Tags:

Pressure

I've just stumbled on this bit that seems relevant to me:

Insist on using objective criteria

The final step is to use mutually agreed and objective criteria for evaluating the candidate solutions. During this stage they encourage openness and surrender to principle not pressure.

http://www.wikisummaries.org/Getting_to_Yes

I find the concept of "pressure" very relevant, and I like the idea of discussions being guided by content rather than pressure.

I'm exploring the idea of filing under this concept of "pressure" most of the things described in codes of conduct, and I'm toying with looking at gender or race issues from the point of view of making people surrender to pressure.

In that context, most codes of conduct seem to be giving a partial definition of "pressure". I've been uncomfortable at DebConf this year, because the conference PG12 code of conduct would cause me trouble for talking about what lessons Debian can learn from consent culture in BDSM communities, but it would still allow situations in which people would have to yield to pressure, as long as the pressure was applied while avoiding the behaviours blacklisted by the CoC.

Pressure could be the phrase "you are wrong", without further explanation, spoken by someone with more reputation than I have in a project. It could be someone with the time to write ten emails a day discussing with someone who barely has the time to write one. It could be someone using elaborate English discussing with someone who needs to look up every other word in a dictionary. It could be just ignoring emails from people whose issues are different from mine.

I like the idea of having "please do not use pressure to bring your issues forward" written somewhere, rather than spending time blacklisting all possible ways of pressuring people.

I love how the Diversity Statement is elegantly getting all this where it says: «We welcome contributions from everyone as long as they interact constructively with our community.»

However, I also find it hard not to fall back to using pressure, even just for self-preservation: I have often found myself in the situation of having the responsibility to get a job done, and not having the time or emotional resources to even read the emails I get about the subject. All my life I've seen people in such a situation yell "shut up and let me work!", and I feel a burning thirst for other kinds of role models.

A CoC saying "do not use pressure" would not help me much here, but being around people who do that, learning to notice when and how they do it, and knowing that I could learn from them, that certainly would.

If you can link to examples, I'd like to add them here.

Posted Tue Sep 23 16:18:22 2014 Tags:

Laptop, I demand that you suspend!

Dear Lazyweb,

Sometimes some application prevents suspend on my laptop. I want to disable that feature: how?

I understand that there may exist some people who like that feature. I, on the other hand, consider a scenario like this inconceivable:

  1. I'm on a plane working with my laptop, the captain announces preparations for landing, so I quickly hit the suspend button (or close the lid) on my laptop and stow it away.
  2. One connecting flight later, I pick up my backpack, feel that it is unusually hot, and realise that my laptop has been on all along, and is now dead either from running out of battery or from thermal protection.
  3. I think things that, if spoken aloud in front of a pentacle, might invoke major lovecraftian horrors.

I do not want this scenario to ever be possible. I want my suspend button to suspend the laptop no matter what. If a process does not agree, I'm fine with suspending it anyway, or killing it.

If I want my laptop to suspend, I generally have a good enough real-world reason for it, and I cannot conceive that a piece of software could ever be allowed to override my command.

How do I change this? I don't know if I should look into systemd, upowerd, pm-utils, the kernel, the display manager or something else entirely. I worry that I cannot even figure where to start looking for a solution.

This happened to me multiple times already, and I consider it ridiculous. I know that it can cause me data loss. I know that it can cause me serious trouble in case I was relying on having some battery or state left at my arrival. I know that depending on what is in my backpack, this could also be physically dangerous.

So, what knob do I tweak for this? How do I make suspend reliable?

Update

Systemd has an inhibitor system, and systemd-inhibit --list only lists 'delay' blocks in my system. It is an interesting feature that seems to be implemented in the right way, and it could mean that I finally can get my screen to be locked before the system is suspended.

It is possible to configure the inhibitor system in /etc/systemd/logind.conf, including ways to ignore inhibitors, and a maximum time after which inhibitors are ignored if not yet released.
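
As an illustration, the relevant knobs in logind.conf look like this (the values below are just an example of what one could set, not necessarily what I use):

# /etc/systemd/logind.conf (excerpt)
[Login]
# ignore applications' inhibitor locks when the suspend key or lid switch is used
SuspendKeyIgnoreInhibited=yes
LidSwitchIgnoreInhibited=yes
# give 'delay' inhibitors (e.g. screen lockers) at most this long before suspending anyway
InhibitDelayMaxSec=5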

Try as I might to run everything that I was running on the plane that time, I could not manage to see anything take an inhibitor block that could have prevented my suspend. I now suspect that what happened to me was a glitch caused by something else (hardware? kernel? cosmic rays!) during that specific suspend.

When I had this issue in the past, it looks like the infrastructure at the time was far more primitive than what we have now with systemd, so I guess that when writing my blog post I had simply correlated my old experiences with a one-off suspend glitch.

If I want to investigate or tune further, to test the situation with a runaway block, I can use commands like systemd-inhibit --mode=block sleep 3600.

I'm quite happy to see that we're moving to a standard and sane system for this. In the meantime, I have learnt that pm-utils has now become superfluous and can be deinstalled, and so can acpi-support and acpi-support-base.

Thanks vbernat, mbiebl, and ah, on #debian-devel for all the help.

Posted Thu Sep 11 14:32:40 2014 Tags:

Wheezy for industrial software development

I'm helping with setting up a wheezy-based toolchain for industrial automation.

The basic requirements are: live-build, C++11, Qt 5.3, and a frozen internal wheezy mirror.

debmirror

A good part of a day's work was lost because of #749734 and possibly #628779. The mirror rebuild is still ongoing; fingers crossed.

This is Italy, and you can't simply download 21Gb of debs just to see how it goes.

C++11

Stable C++11 toolchains now exist and have been adopted quickly. It makes sense: given what is in C++11, it is unthinkable nowadays to start a new C++ project with the old standard.

C++11 is supported by g++ 4.8+ or clang 3.3+. Neither of them is available on wheezy or wheezy-backports.

Backports of g++ 4.8 exist only for Ubuntu 12.04, but they are uninstallable on wheezy due, at least, to a different libc6. I tried rebuilding g++ 4.8 on wheezy but quickly gave up.

clang 3.3 has a build dependency on g++ 4.8. LOL.

However, LLVM provides an APT repository with their most recent compiler, and it works, too. C++11 problem solved!
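
For the record, hooking wheezy up to it is just a matter of an apt source along these lines (check llvm.org/apt for the exact suite names and the archive key; the lines below are from memory and may need adjusting):

# /etc/apt/sources.list.d/llvm.list
deb http://llvm.org/apt/wheezy/ llvm-toolchain-wheezy-3.5 main
deb-src http://llvm.org/apt/wheezy/ llvm-toolchain-wheezy-3.5 main
# then: apt-get update && apt-get install clang-3.5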

Qt 5.3

Qt 5.3 is needed because of the range of platforms it can target. There is no wheezy backport that I can find.

I cannot simply get it from Qt's Download page and install it, since we need it packaged, to build live ISOs with it.

I'm attempting to backport the packages from experimental to wheezy.

Here are its build dependencies:

libxcb-1.10 (needed by qt5)

Building this is reasonably straightforward.

libxkbcommon 0.4.0 (needed by qt5)

The version from jessie builds fine on wheezy, provided you remove --fail-missing from the dh_install invocation.
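
The change amounts to something like this in debian/rules (a sketch: the exact surrounding lines in the jessie package may differ):

--- a/debian/rules
+++ b/debian/rules
 override_dh_install:
-    dh_install --fail-missing
+    dh_install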

libicu 52.1 (needed by harfbuzz)

The jessie packages build on wheezy, provided that mentions of clang are deleted from source/configure.ac, since it fails to build with clang 3.5 (the one currently available for wheezy on llvm.org).

libharfbuzz-dev

Backporting this is a bloodbath: the Debian packages from jessie depend on a forest of gobject hipsterisms of doom, all unavailable on wheezy. I gave up.

qt 5.3

qtbase-opensource-src-5.3.0+dfsg can be made to build with an embedded version of harfbuzz, with just this change:

diff -Naur a/debian/control b/debian/control
--- a/debian/control    2014-05-20 18:48:27.000000000 +0200
+++ b/debian/control    2014-05-29 17:45:31.037215786 +0200
@@ -28,7 +28,6 @@
                libgstreamer-plugins-base0.10-dev,
                libgstreamer0.10-dev,
                libgtk2.0-dev,
-               libharfbuzz-dev,
                libicu-dev,
                libjpeg-dev,
                libmysqlclient-dev,
diff -Naur a/debian/rules b/debian/rules
--- a/debian/rules  2014-05-18 01:56:37.000000000 +0200
+++ b/debian/rules  2014-05-29 17:45:25.738634371 +0200
@@ -108,7 +108,6 @@
                -plugin-sql-tds \
                -system-sqlite \
                -platform $(platform_arg) \
-               -system-harfbuzz \
                -system-zlib \
                -system-libpng \
                -system-libjpeg \

(thanks Lisandro Damián Nicanor Pérez Meyer for helping me there!)

There are probably going to be further steps in the Qt5 toolchain.

Actually, let's try prebuilt binaries

The next day with a fresh mind we realised that it is preferable to reduce our tampering with the original wheezy to a minimum. Our current plan is to use wheezy's original Qt and Qt-using packages, and use Qt's prebuilt binaries in /opt for all our custom software.

We ran Qt's installer, tarred the result, and wrapped it in a Debian package like this:

$ cat debian/rules
#!/usr/bin/make -f

QT_VERSION = 5.3

%:
    dh $@

override_dh_auto_build:
    dh_auto_build
    sed -re 's/@QT_VERSION@/$(QT_VERSION)/g' debian-rules.inc.in > debian-rules.inc

override_dh_auto_install:
    dh_auto_install
    # Download and untar the prebuilt Qt5 binaries
    install -d -o root -g root -m 0755 debian/our-qt5-sdk/opt/Qt
    curl http://localserver/Qt$(QT_VERSION).tar.xz | xz -d | tar -C debian/our-qt5-sdk/opt -xf -
    # Move the runtime part to our-qt5
    install -d -o root -g root -m 0755 debian/our-qt5/opt/Qt
    mv debian/our-qt5-sdk/opt/Qt/$(QT_VERSION) debian/our-qt5/opt/Qt/
    # Makes dpkg-shlibdeps work on packages built with Qt from /opt
    # Hack. Don't try this at home. Don't ever do this unless you
    # know what you are doing. This voids your warranty. If you
    # know what you are doing, you won't do this.
    find debian/our-qt5/opt/Qt/$(QT_VERSION)/gcc_64/lib -maxdepth 1 -type f -name "lib*.so*" \
        | sed -re 's,^.+/(lib[^.]+)\.so.+$$,\1 5 our-qt5 (>= $(QT_VERSION)),' > debian/our-qt5.shlibs


$ cat debian-rules.inc.in
export PATH := /opt/Qt/@QT_VERSION@/gcc_64/bin:$(PATH)
export QMAKESPEC=/opt/Qt/@QT_VERSION@/gcc_64/mkspecs/linux-clang/

To build one of our packages using Qt5.3 and clang, we just add this to its debian/rules:

include /usr/share/our-qt5/debian-rules.inc
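
For the record, the shlibs file generated by the find/sed pipeline above ends up containing lines of this shape (the library name below is just an example):

libQt5Core 5 our-qt5 (>= 5.3)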

Wrap up

We got the dependencies sorted. Hopefully the mirror will rebuild itself tonight and tomorrow we can resume working on our custom live system.

Posted Thu May 29 18:05:17 2014 Tags:

On responsibilities

I feel like in my Debian projects I have two roles: the person with the responsibility of making the project happen, and the person who does the work to make it happen.

As the person responsible for the project, I need to keep track of vision, goals, milestones, status. To make announcements, find contributors, motivate them, deal with users and bug reports, maintain documentation, digest feedback.

As the person who does the work to make it happen, I need quiet time, I need to study technology, design code, write unit tests, merge patches, code, code, code, ask around about deployment information, more code.

I have a hard time doing both things at the same time: the first engages my social skills and extroversion, requires low-latency interaction, and acting when outside things happen. The second engages my technical skills and introversion, requires quiet uninterrupted periods of flow, and acting when inspiration strikes. I never managed to make good use of "gift bugs" or "minions": I often found the phrase "it's easier for me to do it than to explain it" sadly relevant. Now I understand that it's not because of the objective difficulty of explaining or doing things, nor about the value of doing or of involving people. It's about switching from one kind of workflow to another. I could rephrase it as "it's easier for me to stay in flow and fix it than to switch my entire attitude and ask for help".

Of course this does not scale: we've all been saying it for as long as I can remember.

Looking at the situation from the point of view of those two roles, however, I now wonder if those two roles shouldn't really require two people. In other words, they are: the project managers, taking responsibility for making the project happen, and the software designers, artists, and all other kinds of artisans doing the work to make it happen.

Of course I don't want the kind of project manager that shifts responsibilities to artisans, does nothing and takes the credit for the project: not in paid work, not in Debian.

Project management is something else.

I would be interested instead in having the kind of project manager that takes responsibility for the project, checks how the artisans are doing and communicates what is happening to the rest of the world, deals with the community, motivates more people to help, test, try, use, give feedback on things as they happen. A project manager / community manager.

So that while I'm in flow there is someone who tags bugs as "gift", mentors people to find code and documentation, and remembers to write an announcement when I've implemented three cool things in a row and am already busy working on the fourth.

So that I don't write cool ideas in my todo list where nobody can read them, but share them on a mailing list where someone picks up a relevant one and finds someone to make it happen while I'm busy refactoring old code that only I can understand.

So that if I say "sorry, paid work calls, I won't be able to work on this project for a month", I'll be able to completely forget about that project for a whole month, without leaving the community out there to die.

That's an interesting job for non-uploading DDs: please take over my projects. Let's share a vision, and team up to make it happen. Give me the freedom of being the craftsman I enjoy being, and take away from me those responsibilities that I've never asked for.

The worst project managers are those that never asked to be one, but were promoted to it. Let's not repeat that mistake in Debian.

A good part of the credits for this post go to Francesca Ciceri, for the discussions we had on our way back from MiniDebConf Barcelona 2014.

P.S. I'm seeing how a non-uploading DD could be in the Maintainer field for one or more packages, with uploading DDs being, well, uploaders. Food for thought.

Posted Tue Mar 18 16:58:58 2014 Tags:

An absolute truth

Every time people phrase their own opinions as absolute truths, they look grotesque and they incite violence.

If you now feel like stabbing me, then you may be seeing my point.

Posted Tue Feb 11 14:33:18 2014 Tags:

Debops

What I like the most about being a Developer is building tools to (hopefully) make someone's life better.

I like it when my software gets used, and people thank me for it, because there was a need they had that wasn't met before, and thanks to my software now it is being met.

I am maintaining software for meteorological research that is soon going to be 10 years old, and is still evolving and getting Real Work done.

I like to develop software as if it is going to become a part of human cultural heritage, developing beyond my capacity, eventually surviving me, allowing society to declare that the need, small as it was, is now met, and move on to worry about some other problem.

I feel that if I'm not thinking of my software in that way, then I am not being serious. Then I am not developing something fit for other people to use and rely on.

This involves Development as much as it involves Operations: tracking security updates for all the components that make up a system. Testing. Quality assurance. Scalability. Stability. Hardening. Monitoring. Maintenance requirements. Deployment and upgrade workflows. Security.

I came to learn that the requirements put forward by sysadmins are to be taken seriously, because they are the ones whose phone will ring in the middle of the night when your software breaks.

I am also involved in more than one software project. I am responsible for about a dozen web applications deployed out there in the wild, and possibly another dozen of non-web projects, from terabyte-sized specialised archival tools to little utilities that are essential links in someone's complex toolchain.

I build my software targeting Debian Stable + Backports. At FOSDEM I noticed that some people consider it uncool. I was perplexed.

It provides me with a vast and reasonably recent set of parts to use to build my systems.

It provides me with a single bug tracking system for all of them, and tools to track known issues in the systems I deployed.

It provides me with a stable platform, with a well documented upgrade path to the next version.

It gives me a release rhythm that allows me to enjoy the sweet hum of spinning fans thinking about my next mischief, instead of spending my waking time chasing configuration file changes and API changes deep down in my dependency chain.

It allows me to rely on Debian for security updates, so I don't have to track upstream activity for each one of the building blocks of the systems I deploy.

It allows me not to worry about a lot of obscure domain specific integration issues. Coinstallability of libraries with different ABI versions. Flawless support for different versions of Python, or Lua, or for different versions of C++ compilers.

I have often heard someone rant about a frustrating situation, wondered how come it had never happened to me, and realised that someone in Debian, who happens to be more expert than I can possibly be, had thought hard about how to deal with that issue years before.

I know I cannot be an expert of the entire stack from bare iron all the way up, and I have learnt to stand on the shoulders of giants.

'Devops' makes sense for me in that it hints at this cooperation between developers and operators, having constructive communication, knowing that each side has their own needs, trying their best to meet them all.

It hints at a perfect world where developers and operators finally come to understand and trust each other's judgement.

I don't know that perfect world, but I, a developer, do like to try to understand and trust the judgement of sysadmins.

I sympathise with my sysadmin friends who feel that devops is turning into a trend of developers thinking they can do without sysadmins. Reinventing package managers. Bundling dependencies. Building "apps" instead of components.

I wish that people who deploy a system built on such premises have it become so successful that they end up being paid to maintain it for their whole careers. That is certainly what I wish and strive for, for me and my own projects.

In my experience, a sustainable and maintainable system won't come out of the startup mindset of building something quick&dirty, then sell it and move on to something else.

In my experience, the foundations of sustainable and maintainable systems have been well known and tested in Debian, and several other distributions, for over two decades.

At FOSDEM, we thought that we need a name for such a mindset.

Between beers, that name came to be "debops". (It's not just Debian, though: many other distributions get it right, too)

Posted Tue Feb 4 18:52:39 2014 Tags:
Posted Sat Jun 6 00:57:39 2009

Taiwan.

Amis and Paiwan input method and character set

Arne Götje (高盛華) created:

The scripts, especially Amis, make heavy use of Unicode combining characters. They should display well, at least with the DejaVu Sans font, in many applications.

Try it out: if it displays correctly, you should see:

  • accented letters instead of letters next to accents.
  • i with both the dot and the accent.

Update: there is now a wiki page on the Debian wiki.

Posted Sat Jun 6 00:57:39 2009 Tags:

Happy new year

A year ago we got in touch with various Taiwanese aboriginal tribes to try to start localisation efforts.

Thanks to the research the Taroko people did during 2007 and the prototype work of tonight, the Taroko people in Taiwan can see the computer calendar of the new year in their own language:

trv_TZW Gnome calendar

Posted Sat Jun 6 00:57:39 2009 Tags:

Character list for the Amis language

We mapped the available glyphs and accents for the Amis language.

The letters in alphabetical order:

    a c d f ng h i k l m n o p r s t u w y

Every one of them can get an acute or circumflex accent on top. ng can get a dot on top of the g.

The accents are literally on top: i would get the dot PLUS the accent on top.

Not all accented characters directly exist in Unicode; however Unicode developed various kinds of combination features to take care of these cases.

Then we need an input method that would insert ng instead of g and allow typing all the accent combinations.
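
As a quick illustration of how the combining characters work (a sketch; it assumes a shell whose printf understands \u escapes, such as bash 4.2+, and a font like DejaVu Sans):

# i keeping its dot plus an acute accent: base letter + COMBINING DOT ABOVE + COMBINING ACUTE ACCENT
printf 'i\u0307\u0301\n'
# ng with a dot over the g: n + g + COMBINING DOT ABOVE
printf 'ng\u0307\n'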

Here is the full character set:

    a     á    â
    c     ć    ĉ
    d     d́    d̂
    f     f́    f̂
    ng    nǵ   nĝ  nġ
    h     h́    ĥ
    i     i̇́    i̇̂
    k     ḱ    k̂
    l     ĺ    l̂
    m     ḿ    m̂
    n     ń    n̂
    o     ó    ô
    p     ṕ    p̂
    r     ŕ    r̂
    s     ś    ŝ
    t     t́    t̂
    u     ú    û
    w     ẃ    ŵ
    y     ý    ŷ

Update: this character list has been improved and the good version is found in the Debian wiki.

The list is not displayed correctly with many fonts or rendering engines. Arne made a test page that explicitly sets a font that works.

The accents are not taken into account when sorting.

Uppercase letters are not used.

Note: the page has been updated to reflect further input from Unicode and Amis people.

Update: there is now a wiki page on the Debian wiki.

Posted Sat Jun 6 00:57:39 2009 Tags:

Glitches in the Matrix

Taiwanese license plate over German license plate with K nation code

Korean car with Taiwanese license plate (edited to anonymise it) over EU license plate with (Portuguese??) numbers on the right, and Korea as country code.

'Italian' pasta sold by a British supermarket in Taiwan

Italian pasta sold by a British supermarket, in Taiwan.

Also, "Messicani" is not a kind of Italian pasta. Google for it, and you'll only find it mentioned in British websites.

Posted Sat Jun 6 00:57:39 2009 Tags:

OpenStreetMap party at Kaohsiung, Taiwan

Apparently, yesterday we had the first OpenStreetMap event in Taiwan!

We met in a café/restaurant equipped with power plugs, wireless network and an overhead projector, and we had a bit of an introduction, chat and lunch.

Then we split into groups and exploited the fact that the newly built underground (KMRT) system is still free of charge to spread out and map around the stations.

Finally, we reconvened at someone's house to see how to put the data together, draw roads, tag and upload.

Highlights of the day:

Technical bits:

  • People with the eeePC: the eeePC was very popular, and very handy for going around storing tracks, as you can just chuck it in one bag. JOSM runs fine, although it could really use an interface redesign to fit in the small screen. In fact, it could really use an interface redesign to fit in the standard 1024x768 screen of my laptop.
  • We could not use the tracks made with the Garmins because we did not know we had to do "Setup -> Map -> Lock On Road = Off" and it was on by default. Now we know it for next time.
  • Something like a SiRFstarIII really helps in a city made mainly of very tall buildings with lots of steel and glass. My cheap Sony-based GPS receiver, which worked ok in the Bolognese countryside, was next to useless here, continuously losing the fix and producing a crazy zigzagging track of doom, only useful for figuring out big long straight roads.
  • Geocorrelation of digital camera pictures rocks! Who needs to store waypoints when you can just take pictures with the digital camera and have them show up as waypoints in JOSM? The trick of taking a picture of the GPS time and using that to compute the time offset is great. Also, we found it easier to just fire up gpscorrelate to do the geocorrelation rather than figuring out how the tools in JOSM work.
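
For the curious, the gpscorrelate runs were of roughly this shape (a sketch from memory; check gpscorrelate(1) for the exact options):

# tag the photos using the recorded track, shifting camera time to UTC (Taiwan is UTC+8)
gpscorrelate --gps track.gpx --timeadd +8 *.jpg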

Issues to address:

  • There is a strong need for a zh_TW translation plugin for JOSM; I'll try to find out how to do it and pass on the information to whoever can do it.
  • Road names could be written either in English or in Chinese characters. Currently English has been used for the name tag because osmarender cannot render Chinese characters. There is some planning to create an OSM mirror in Taiwan which renders twice and allows choosing the rendering language for the map. I will try to get a planet.osm extract for Taiwan that people can use to experiment with this; thanks to people in #osm for giving me names of people to contact. I will try later, after Europe wakes up from this even-earlier-than-usual Sunday morning.
Posted Sat Jun 6 00:57:39 2009 Tags:

How to freak out a Frenchperson

Bottles with 'Champagne Soda' label
'Champagne Soda' label close-ups

The way to freak out an Italian, instead, is to show them a bottle of "Lambrini" in the UK.

Posted Sat Jun 6 00:57:39 2009 Tags:

Meet the EeePC

Being in Taiwan, we swiftly got hold of an EeePC.

Instead of installing Debian into it, we decided to keep the original system and see how it works. It's a Debian derivative, and the feeling inside a terminal window is quite familiar.

The boot is very fast. Two seconds after the video BIOS quickly flashes on the screen, the X cursor appears. It's definitely worth having a look at how this devil boots.

The "Asus Launcher" is worth a look. IMHO it's nicer and more useful than the usual launcher menu that we get in Gnome or KDE 3, although it probably only makes sense on a small display. It replaces the desktop background, has tabs, no clutter and allows to launch applications. Turns out it's customisable as well.

What's on the system

KDE 3.4.2, with some applications renamed so that their names are more human. For example, konsole became console.

vim! \o/ But not emacs :)

mc! Someone out there wanted to make my life easier.

fbreader. I had never heard of it, but it's a very good discovery, and I've now started to use it on my laptop as well.

Little howtos

To get to a terminal, hit Ctrl+T in the file manager, or Ctrl+Alt+T elsewhere.

The root password is the same as the user password.

To change the system language, I managed with a simple dpkg-reconfigure locales.

Ways they simplified the unix system

It's single user: I didn't find a way to create multiple users other than from the terminal, and the login program does not ask for a username, only for the password.

The "win" key has a house painted on it, and it's used as "hide/show all applications" key. When all applications are minimised, the Asus launcher is visible instead of the X background: this behaviour basically turns the key into a sort of "run application" key. The key still works as a kind of shift, although it probably was not intended to.

The repository management is interesting. /etc/apt/sources.list contains:

deb http://update.eeepc.asus.com/p701 p701 main
deb http://update.eeepc.asus.com/p701/tw p701 main

which means they have a repository per EeePC model and a subrepository per localised version.

The "Internet" group of applications has a Wikipedia toplevel application: it's nice to see the ecosystem of free software / free culture coming together to provide a nice user experience.

An extra link to the SD card mount point (besides the one in /mount) appears in the home directory automatically when the SD card is inserted. This means that when you do "save as" from all sorts of applications, the SD card is there, easy to reach. This helps if one decides not to use the internal flash for data, and just save everything in the SD card: I like doing this, as it allows me to quickly move the SD card with all the data between the EeePC and other computers.

Changes I made so far

Activate en_GB.UTF-8 via dpkg-reconfigure locales.

Add en_GB.UTF-8 to /etc/scim/global, to get SCIM input methods to work.

Little flaws

Virtual screens are enabled, so Win+arrow switches virtual screens. The feeling you get if you hit Win+arrow is that all your applications have disappeared. This could be improved by having the window manager keep the Asus launcher at the bottom of the current virtual screen, instead of just at the bottom of the first screen, or by disabling virtual screens by default.

It is possible to drag the lower panel around, maybe accidentally: that's another of our fancy default "features" that should be disabled by default.

It is also possible to remove applets from the applet bar by mistake: for example, I wanted to disconnect the wireless, and I instead ended up quitting the wireless applet. Luckily, the next time I started the computer it magically came back.

~/.xsession-errors is continuously getting the useless stdout/stderr debugging flood of GUI apps. Usually no one bothers, except that in this case the file is on flash, where unneeded writes are very much unwanted. I'm considering symlinking it to /dev/null, but ideally we should get GUI apps to only write out what is really important.
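
If I do go for the symlink, it is as blunt as this (a workaround, not a real fix):

rm -f ~/.xsession-errors
ln -s /dev/null ~/.xsession-errors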

The battery charging indicator doesn't show how long it is going to take until the battery is fully charged.

No capslock or numlock LEDs. This probably calls for disabling or remapping capslock: numlock is very hard to hit by accident, but capslock isn't.

Random thoughts

If you buy an EeePC, I really suggest you think of it as a mass-consumption appliance and stay on the original OS for a while. Most of what's in here is what we use every day, just in a different context. Try to use it as an appliance and see if it is perfect, and if it isn't, try to find out what is missing. It is a fantastic way to find out important bits that are missing in Debian as well.

Also, if you're used to tailoring everything to yourself before starting to use a Linux system, this is a great way to try the usage experience that we can offer by default. The Firefox welcome page the first time you connect, for example, is surprisingly nice. Everything we know as doable comes a bit as a surprise, because this time someone has done it for us.

I wish that that someone could be invited to talk at the next Debconf: the possibility of having a look at the work that has been done in bending Debian to this nice little device is, to me, one of the most valuable things so far about the EeePC.

Help/About KDE/Credits

It's reachable by most applications, and says:

The development team would like to thank the following people and organizations for their contributions:

  • the Debian Project,
  • the GNU Project,
  • the KDE Project,
  • the Mozilla Project,
  • the OpenOffice.org Project,
  • the SAMBA Project,
  • the X.Org Foundation,

Linus Torvalds and the other Linux kernel developers, and Free software developers around the world.

I'm using an appliance that is thanking me, and others like me: priceless!

Posted Sat Jun 6 00:57:39 2009 Tags:

Creating a new locale

I'm currently in Cilamitay, in the east of Taiwan. There is a little meeting of Taiwanese Free Software people and people from the Amis, Taroko and Puyuma tribes, with the idea of starting localisation efforts for some aboriginal languages.

These are some of the issues we are going to discuss:

Language code

A new ISO standard (639-3), which will include language codes for the Taiwanese aboriginal tribes, will hopefully be formalised in January. We'll have to work out some temporary solution, but there's good hope that it won't have to stay temporary for long.

List of characters

Because of Christian missionary influence, both Amis and Taroko use a roman alphabet, with accents. We need to work out the complete list of characters and accent combinations, see if everything is in Unicode, and see how they sort.

We then need to find a comfortable way to input them using the keyboards normally available here (English US layout): compose key? Dead keys? How about on Windows?

Womble2 on IRC tells me that on Windows one can work with MSKLC.

Technical terms and country list

We need to work out how to map terms that do not exist in the language.

Technical terms are usually borrowed from Japanese.

Names for all the countries in the world probably do not exist.

Translation interface

We need to find an easy to use interface to input the translations.

There is Rosetta.

There is Pootle. (Thanks to Christian Perrier for pointing me at it)

There is Webpot.

Update: there is now a wiki page on the Debian wiki.

Posted Sat Jun 6 00:57:39 2009 Tags:

Character list for the Paiwan language

We mapped the available glyphs and accents for the Paiwan language.

The letters in alphabetical order:

a b c d e f h i j k l m n p q r s t u v w y z ḏ nġ ḻ ṟ ṯ 

No uppercase.

Update: this character list has been improved and the good version is found in the Debian wiki.

All the characters are in Unicode except nġ, which already needs to be requested for the Amis script.

We need to design an input method to enter the underlined letters and the nġ.

Update: there is now a wiki page on the Debian wiki.

Posted Sat Jun 6 00:57:39 2009 Tags:
Posted Sat Jun 6 00:57:39 2009

Food and recipes.

Coppone and spinach, oriental style

Ingredients:

  • a coppone (pork neck) steak
  • frozen spinach
  • oil
  • garlic
  • ginger
  • chilli
  • star anise
  • soy sauce
  • toasted sesame oil
  • pepper

The supermarket often has coppone steaks on sale. They are excellent on the grill, but lacking a grill I once improvised this, and every now and then I make it again. It's a very quick dinner that can be put together when there's nothing in the house, with ingredients taken straight out of the freezer.

Fry the garlic, ginger and star anise in the oil.

Add the coppone cut into small pieces and brown it. While it is cooking, add the crumbled chilli, the sesame oil and a bit of soy sauce.

When the meat has taken on some colour, add the thawed spinach and sauté it together with the meat and its juices.

Adjust the salt with the soy sauce and dust with ground pepper before serving.

Posted Sat Mar 9 18:42:22 2013 Tags:

Coffee and anchovy béchamel

Ingredients:

Inspired by a coffee-flavoured broccoli soufflé eaten at the wonderful trattoria Antichi Sapori in Parma, I too tried to combine coffee and broccoli.

The idea was to make a sauce to pour over freshly boiled broccoli. Chef Davide Sensi had talked about coffee and anchovies, so I decided that the flavour of the sauce should come from coffee and Thai fish sauce. To thicken them, even just a classic roux could do.

The result: a béchamel in which the roux is thinned not with milk, but with coffee, Thai fish sauce and the broccoli cooking water.

The first attempt came out a bit too heavy on the coffee. On broccoli, though, it works reasonably well.

Posted Fri Mar 8 19:17:04 2013 Tags:

Aubergine soup

I, too, have been guilty of discovering a shrivelled aubergine in the bottom of the fridge, and I think I improved on the recipe a bit.

First I softened the onion in butter, then I added the crushed garlic, the aubergine peeled and diced, and the cumin seeds. I let them all roast in the pan for a while, until the aubergines took some colour, then I added the stock. Carefully, as the pan with the roasting aubergines is far above 100°C and the first splash of water turns into steam very quickly.

When it was all soft and yummy, I added two spoonfuls of tahini, as a thickener, a generous grating of nutmeg, and blended the lot.

What came out is basically a soup version of baba ganoush, and it is yummy!

Posted Thu Nov 8 21:15:29 2012 Tags:

Spaghetti with friggitelli and mozzarella

Serves 4:

  • 300 g spaghetti
  • 300 g friggitelli (small sweet frying peppers)
  • 70 g soft breadcrumbs
  • 125 g mozzarella
  • grated Parmesan to taste
  • 4 small mint leaves
  • extra virgin olive oil to taste
  • salt & pepper

Wash the peppers, remove the seeds and the stalk, dry them and cut them into strips.

Blitz the bread in a mixer and brown it in a pan with 3 tablespoons of oil until it becomes crunchy, then set it aside.

Cut the mozzarella into small cubes and set it aside as well.

Heat another 5 tablespoons of oil, add the peppers and cook them over a high flame for 10 minutes, seasoning with salt and pepper.

Cook the pasta, drain it and toss it in the pan with the peppers.

Sauté everything over a lively flame for a few minutes.

Add the diced mozzarella, the crunchy bread and the Parmesan.

As a finishing touch, add the torn mint and serve.

(via http://friggitelli.it/)

Made today: good. For the breadcrumbs I used leftovers from the stuffing of last night's Roman-style artichokes.

With the artichoke cooking water, risotto tonight.

Posted Mon Apr 30 16:08:55 2012 Tags:

Mushroom risotto (and a bit of banana)

I felt like experimenting, and I had bananas in the house. What can you do with bananas?

Enter http://www.foodpairing.be/

This site groups ingredients by the chemical compounds that give them their flavour. And who sits next to banana? MUSHROOMS!

So let's make a mushroom risotto: the usual base of onion softened in butter until it turns translucent, then in goes a small piece of banana cut into thin slivers, to fry and caramelise a little as well. Finally, a few pieces of dried porcini soaked in warm water.

Then add the rice, let it fry a bit in the mixture as well, and then add the stock (I had a stock cube made specifically for mushroom risotto, bought at the nearby little shop of nice things).

No salt, no pepper, no butter to cream it, nothing. Once cooked, I just let it rest for 5 minutes.

The result was delicious. "Did you put cream in it? How can it be so creamy?". Savoury but not sweet. You can tell the banana is there, but you can't tell it is banana.

From today, I think the secret ingredient of my mushroom risotto will be a small piece of banana.

Note that, as you can read at http://khymos.org/pairings.php, banana also goes well with parsley, and so do mushrooms. I forgot the parsley in the risotto this time, but it will be there in the next experiment: we even have it fresh in the garden. The same page also mentions a likely cocoa-mushroom molecular pairing... who knows.

Related pages:

Posted Sat Jun 6 00:57:39 2009 Tags:

Chicken and spinach parcels

I've been trying for a while to figure out how to cook a good steak, and I finally found a cooking site that speaks my language.

So let's play with the Maillard reaction. After a decent success with a cheap little steak, the time came to try chicken, which is the only meat my girlfriend likes.

Chicken meat has proteins, but not enough sugars for the Maillard reaction to happen. Therefore, we marinate the meat in something that contains sugars.

Googling "pollo" and "marinata", a nice recipe comes up: "Petti di pollo ripieni al miele e aceto balsamico" (chicken breasts stuffed with honey and balsamic vinegar). The recipe says: "brown the chicken (3-4 minutes per side, over medium heat)", but I wanted to look Maillard in the eye and "medium heat" wasn't enough for me; besides, you can't find speck in England, and I had spinach in the house rather than "Tatsoi salad", so I fiddled with the recipe as I usually do:

Chicken roast stuffed with spinach

Ingredients:

  • 3 chicken breasts, sliced
  • 3 slices of lean bacon, rind removed (we are in England...)
  • 6-7 cubes of frozen spinach
  • 2 tablespoons of honey (acacia if possible)
  • 2 tablespoons of balsamic vinegar
  • 1 tablespoon of soy sauce
  • 1 spring onion
  • crumbled dried chilli
  • oil, salt, black pepper
  • several cloves of garlic

I thawed the spinach in a small pan over low heat, together with 4 or 5 crushed small cloves of garlic.

In the meantime, I made the marinade with the honey, balsamic vinegar, soy sauce, the finely chopped spring onion, a crushed clove or two of garlic, the chilli, two tablespoons of oil, salt and pepper.

I then flattened the chicken breasts a bit, put a slice of bacon on top, plastered on a layer of spinach, rolled everything up and tied it with string.

Once the parcels were ready I put them to soak in the marinade. I left them there a good half hour, then turned them over and left them there another half hour, so that they would soak up flavour and colour nicely.

At this point, I turned on the oven at 180° (for later) and put a pan (I used the non-stick wok) on the stove with a little olive oil.

I made sure not to do any damage with the high flame: the Maillard reaction happens above 140°, the smoke point of olive oil is between 190° and 240°, and that of the non-stick pan's Teflon is 300°, so there is some margin.

High flame, hot oil, in goes the first chicken parcel: two minutes per side, with the pan covered to limit the damage from splashes. As each parcel was done on both sides I moved it to an oven dish, poured a drizzle of oil over it, and put everything in the oven for about 20 minutes, to stay on the safe side: even over a high flame, 2 minutes per side did not seem enough to cook the chicken and bacon inside.

Between one parcel and the next, it is worth removing most of the cooking residue from the pan and setting it aside, otherwise after 3 parcels on 2 sides each it risks burning. At the end, with the pan nicely crusted, I poured in some wine, added the residue I had set aside, and deglazed everything over low heat with a wooden spoon. I then added a bit of sugar to counter the sourness of the wine and let it reduce, after which I strained it through a sieve and got a delicious little sauce to pour over the parcels when serving.

Unfortunately I have no photo because, be it the look or the smell, all three parcels disappeared before it occurred to us to take one.

Since there was some inviting fat left at the bottom of the oven dish and the oven was still hot, I then roasted some potatoes in it. In Italy it must be 40 degrees right now, but here we struggle to reach 20.

The whole thing was washed down with a bottle of Dolcetto del Monferrato that we had found on sale at the supermarket a while ago: with chicken as flavourful as this, a white wine would have stood no chance.

Posted Sat Jun 6 00:57:39 2009 Tags:
Posted Sat Jun 6 00:57:39 2009

Pages about Debtags.

Evolution's old odd mail folders to mbox

Something wrong happened in my dad's Evolution. It would just get stuck checking mail forever, with no useful diagnostic that I could find. Fun. Not.

Anyway, I solved it by resetting everything to factory defaults, moving away all gconf entries and .evolution/ files. Then it started to work again; of course, I then needed to reconfigure it from scratch.

It turned out, however, that some old mail was only archived locally, in a weird format that looks like this:

$ ls -la Enrico/
total 336
drwx------ 2 enrico enrico   4096 Jul 23 03:05 .
drwxr-xr-x 7 enrico enrico   4096 Jul 23 03:12 ..
-rw------- 1 enrico enrico   3230 Dec  4  2010 113.HEADER
-rw------- 1 enrico enrico  14521 Dec  4  2010 113.TEXT
-rw------- 1 enrico enrico   3209 Oct 22  2010 134.HEADER
-rw------- 1 enrico enrico   2937 Oct 22  2010 134.TEXT
-rw------- 1 enrico enrico   3116 Jun 27  2011 15.
-rw------- 1 enrico enrico   3678 Jun 27  2011 168.
-rw------- 1 enrico enrico     73 Apr 27  2009 22.1.MIME
-rw------- 1 enrico enrico   3199 Apr 27  2009 22.2
-rw------- 1 enrico enrico     88 Apr 27  2009 22.2.MIME
[...]

I couldn't even find the name of that mail folder layout, let alone conversion tools. So I had to sit down and waste my Sunday break writing software to convert it to an mbox file. Here's the tool, may it save you the awful time I had today: http://anonscm.debian.org/gitweb/?p=users/enrico/evo2mbox.git
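
The core idea, stripped of all the details the real tool has to handle, is just to glue each N.HEADER and N.TEXT pair back into one message. A very rough sketch (not the actual evo2mbox logic; multipart messages like 22.1.MIME above need real parsing, and the mbox From line should carry the real date):

for h in Enrico/*.HEADER; do
    n=${h%.HEADER}
    echo "From - Thu Jan  1 00:00:00 1970"
    cat "$h" "$n.TEXT"
    echo
done > Enrico.mbox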

Note: feel free to fork it, or send patches, but don't bother with feature requests. Evolution isn't and won't be a personal interest of mine. Anything that makes an afternoon at my parents' more tiresome than a whole busy month of paid work doesn't deserve to be.

Luckily they now seem to have changed the local folder format to Maildir.

Posted Mon Jul 23 03:27:50 2012 Tags:

Giving away distromatch

At last year's FOSDEM I tried to inject a lot of energy into distromatch, but shortly afterwards I had to urgently rewrite the nm.debian.org website.

After Lars Wirzenius' GTDFH talks in Bologna and Varese I wrote a tool which, among other things, is able to scan my home dir and list how many projects I'm working on.

The output was scary. Like, there are too many. Like, I couldn't even recite the list from memory. And since I couldn't do that, I had no idea there were so many. And I kept being stressed because I couldn't manage to take care of them all properly.

Now that I became conscious of the situation, it's time to deal with it like a grown up, and politely back off from some of my irresponsible responsibilities.

Distromatch is one of them. It had just started as a proof of concept prototype, and I had the vision that it could be the basis for a fantastic culture of sharing and exchange of information across distributions.

I need to distinguish the vision from the responsibility. I still have that vision for distromatch, but I cannot take responsibility for making it happen.

So I am giving it up to anyone who has the time and resources to pick up that responsibility.

Current status

It works well enough as a prototype. I believe it can successfully map a large enough slice of packages that one can prototype stuff based on it.

I have for example used it to export the Debtags categories for other distros, and the resulting file looked big enough to be used for prototyping category-based features on distributions that don't have them yet.

I think it also works well enough to support a few common use cases, like sharing screenshots, or doing most of the work of converting dependency lists from a distro to another.

And finally, anyone can deploy it, and work on it.

Existing data sources

Everything I index in the Debian distromatch deployment is available at http://dde.debian.net/exports/distromatch/. The rpm-based data in there comes from an export script I wrote that runs on Sophie, but which I cannot maintain properly.

This is an experimental export of Fedora and OpenSUSE data: http://tmp.vuntz.net/misc/distromatch/distromatch-opensuse-fedora.tar

All existing export scripts are found in the distromatch git repo on Gitorious.

Contacts I gathered at Fosdem

At Fosdem I devoted quite some work to get contacts from all possible distributions and software repositories, so that distromatch could be hooked into them. Here is a dump of what I have collected:

  • Debian: me
  • OpenSuse: Vincent Untz and Adrian Schröter
  • Fedora: Tom "Spot" Callaway
  • Arch: Tasser on IRC
  • CPAN: contact the people of https://metacpan.org/, on irc.perl.org:#metacpan or make an issue on github
  • NetBSD: ask on #netbsd on Freenode
  • FreeBSD: Baptiste Daroussin (bapt)
  • Mageia: Olivier Thauvin

Some of those contacts may have "expired" in the meantime: I wouldn't assume all of them still remember talking with me, although most probably still do.

My commitment for the time being

I am happy to commit, at the moment, to maintaining a working data export for Debian data. I can take responsibility for making sure that the Debian data for it stays up to date, and for fixing it ASAP if that is not the case.

I hope that now someone can take distromatch over from me, and make it grow to achieve its great potential.

Posted Sat Jul 21 16:54:18 2012 Tags:

More diversity in Debian skills

This blog post has been co-authored with Francesca Ciceri.

In his Debconf talk, zack said:

We need to understand how to invite people with different backgrounds than packaging to join the Debian project [...] I don't know what exactly, but we need to do more to attract those kinds of people.

Francesca and I know what we could do: make other kinds of contributions visible.

Basically, we should track and acknowledge the contributions of webmasters, translators, programmers, sysadmins, event organisers, and so on, at the same level as what we do for packagers: DDPO, minechangelogs, Portfolio...

For any non-packaging activity that we can make visible and credited, we get:

  • to acknowledge the people who do it, and show that they are active contributors in the project;

  • to acknowledge the work that gets done, and show the actual amount of non-packaging work that gets done in Debian every day;

  • to allow non-packagers to have a reputation, too: first of all, they deserve it, and among other things, it would make nm processing trivial.  

Here's an example: who's the lead translator for German? And if you are German, who's the lead translator for Spanish? Czech? Thai? I (Enrico) don't know the answers, not even for Italian, but we all should! Or at least it should be trivial to find out.

Starting to change this is just a matter of programming.

Francesca already worked on a list of trackable data sources, at least for translators.

Here are some more details, related to translation:

  • Translations can be tracked via the i18n robot (and the related statistics). This works only with teams who activated the robot and actively use the pseudo-urls in their messages on localisation mailing lists. Some translators don't bother to do it, but it's ok to only support the main workflow. It beats extracting .po files from l10n-tagged BTS bugs at any rate.

  • DPN and website translations: for wml pages there's a specific field to be extracted for each translated page: grep for maintainer="name" on normal wml pages, while for DPN translations we have a specific translator="name" field (see the sketch after this list). The problem is that this field is not mandatory, so sometimes there's no indication of the maintainer. Again, it's ok to only support the main workflow.

    Anyway, this is preferable to the cvs log: often the commit is done by the coordinator of the team and not by the actual translator. See above for the alternative solution of using the statistics provided by the i18n bot.

  • DDTSS: since the new release of DDTSS-Django, done by Martijn van Oosterhout about a year ago, the contributions are by default non-anonymous. This should be easy to track.

  • http://wiki.debian.org: it is more complicated because in the wiki we do not have a proper l10n translation workflow, so the only thing that can be tracked is the changelog of the $LANG/* pages. A nice idea would be to have translated pages list the version of the page that was translated and who did the translation.

  • translation of debian manuals and release notes: usually in the translation of manuals and long documentation there is a specific translator field.

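To give an idea of how little code this kind of tracking needs, here is a minimal, hypothetical sketch that walks a checkout of the website and counts credits from the maintainer="..." and translator="..." fields mentioned above. The directory layout and the exact field spellings are assumptions to be checked against the real webwml tree:

    #!/usr/bin/python3
    # Hypothetical sketch: count translation credits in a webwml checkout.
    # Assumption: pages live under <lang>/**/*.wml and carry
    # maintainer="..." or translator="..." fields.
    import collections
    import os
    import re
    import sys

    FIELD = re.compile(r'(?:maintainer|translator)="([^"]+)"')

    def credits(webwml_dir, lang):
        counts = collections.Counter()
        for root, dirs, files in os.walk(os.path.join(webwml_dir, lang)):
            for name in files:
                if not name.endswith(".wml"):
                    continue
                with open(os.path.join(root, name), errors="replace") as fd:
                    for match in FIELD.finditer(fd.read()):
                        counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        # e.g.: ./credits.py ~/src/webwml italian
        for name, count in credits(sys.argv[1], sys.argv[2]).most_common(10):
            print("{:5d} {}".format(count, name))

Run against a webwml checkout and a language directory, this would print the ten most credited translators for that language; the same pattern applies to the DPN editor= field and the events coordinator tag mentioned below.
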
And here are some notes about other fields:

  • DPN editors: for each issue there's a list of editors at the bottom of the page. In the wml: grep for editor=.

  • Artwork: artwork submitted via debianart is easy to track on the portal. In any case, you can usually find the author in the license and copyright file.

  • Programming: the only thing we have is the list of services which can be expanded if needed.

  • Press and publicity: there does not seem to be much besides svn logs.

  • l10n-english: The Smith Review Project page has some tracking links. Other activities can probably only be tracked, at the moment, via mailing list activity.

  • Events: we can use the "main coordinator" field on www.debian.org/events/$year/$date-$eventname.wml: grep for <define-tag coord>; for events not published on http://www.debian.org but only on http://wiki.debian.org, the coordinator or the contact for the event is usually present on the page itself.

  • Sysadmins: we haven't asked DSA.

And finally, if you are still wondering who those translation coordinators are, they are listed here, although not all teams keep that page up to date.

Of course, when a data source is too hard to mine, it can make sense to see if the workflow could be improved, rather than spending months writing complicated mining code.

This is a fun project for people at Debconf to get together and try.

If by the end of the conference we had a way to credit some group of non-packaging contributors, even just one like translators or website contributors, we would at least finally have started having official trackers for the activities of non-packagers.

Posted Thu Jul 12 14:01:54 2012 Tags:

Debtags for derivative distributions

Sometimes I do cool stuff and I forget to announce it.

Ok, so I recently announced a new Debtags website.

I forgot to say in the announcement that the new website does not only know of Debian packages: see for example this page, which at the very bottom says: "Distributions: oneiric, precise, sid, testing".

This means that already, here and now, debtags.debian.net can be used to tag packages from both Debian and Ubuntu, and can easily be extended to cover the entire Debian ecosystem.

If you are a package maintainer, you will notice that your maintainer page shows your packages from everywhere. If you want to filter things a bit, for example hide obsolete packages from an old Debian Stable or Ubuntu LTS, just click on the "Settings" link on the top right to configure the page.

How it works

The magic is in this mergepackages script, which is run daily and exports merged Packages files at dde.debian.net. What debtags.debian.net uses as its Packages and Sources files are just those all-merged.gz and all-merged-sources.gz.

The merging is simple: that rebuild script processes files in order, and the first version of a package that is found is chosen as the base for the one that will go in the merged Packages file. Some fields like "Description" are just taken from this pivot package, others like Architecture or dependencies are merged into it. It's arbitrary, but works for me: the result has all the packages with all their possible architectures and dependencies, and is ready to be indexed with apt-xapian-index.
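
To make that a bit more concrete, here is a toy sketch of the merge logic just described. It is not the real mergepackages script, just the "first stanza wins, accumulate a few fields" idea, and for brevity it ignores continuation lines:

    # Toy sketch of the merge: the first stanza seen for a package becomes the
    # pivot; later occurrences only contribute to a few merged fields (here,
    # just Architecture). Not the real mergepackages script.
    def parse_stanzas(path):
        """Yield Packages stanzas as dicts (continuation lines are skipped)."""
        stanza = {}
        for line in open(path, encoding="utf-8", errors="replace"):
            line = line.rstrip("\n")
            if not line:
                if stanza:
                    yield stanza
                stanza = {}
            elif not line.startswith(" ") and ":" in line:
                field, _, value = line.partition(":")
                stanza[field] = value.strip()
        if stanza:
            yield stanza

    def merge(paths):
        merged = {}
        for path in paths:
            for stanza in parse_stanzas(path):
                pivot = merged.setdefault(stanza["Package"], stanza)
                if pivot is not stanza:
                    # Keep the pivot's Description, merge the architectures
                    archs = set(pivot.get("Architecture", "").split())
                    archs.update(stanza.get("Architecture", "").split())
                    pivot["Architecture"] = " ".join(sorted(archs))
        return merged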

At the moment I pull data from Debian and Ubuntu, but you can see that the script can easily be extended to pull data from any Debian-style ftp archive, so any Debian derivative can go in. I've already started negotiations with the Derivatives Census on how to add any Debian derivative and keep the list up to date.

How to export tags for your own distribution

I'll use Ubuntu as an example since the data is already available.

Adding Debtags to the Ubuntu Packages file takes just these steps:

  1. Get the full reviewed tag database
  2. Optionally filter out those packages that you are not interested in
  3. Tweak this script to build an overrides file.
  4. Give the overrides file to your favourite ftp archive building tool.

The make-overrides script is a bit rusty: if you improve it, please send me your changes.
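
For reference, here is a hypothetical sketch of the shape of that step: it turns a debtags tag database (lines like "pkgname: tag1, tag2") into an extra-override file (lines like "pkgname Tag tag1, tag2"). The exact file formats expected by your archive tool are an assumption to double-check:

    # Hypothetical sketch: tag database in, extra-override lines out.
    import sys

    def tagdb_to_overrides(tagdb, out):
        for line in tagdb:
            line = line.strip()
            if not line or ":" not in line:
                continue
            pkg, _, tags = line.partition(":")
            out.write("{} Tag {}\n".format(pkg.strip(), tags.strip()))

    if __name__ == "__main__":
        # e.g.: ./make-overrides.py < tag-database > extra-overrides
        tagdb_to_overrides(sys.stdin, sys.stdout)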

That is it, nothing else required, no excuses, it's ready, here, now!

Hitches and gotchas

This merged Packages file is a bit of a hack, and suffers from name conflicts across distributions, where two different pieces of software are packaged under the same name in two different distributions.

Ideally, name conflicts should not happen: if a derivative decided to package kate and call it gedit, they deserve to have it tagged uitoolkit::gtk. I think it's rather important that the whole Debian ecosystem works as much as possible with a single package namespace.

However, that reasoning fails if you take time into account: packages get renamed, like git and chromium, and may mean completely different things, for example, if you compare Debian Stable with Debian Sid.

This last issue is caused by debtags working only with package names, not with package versions. I have a strategy in mind based on being able to override the stable tag database using headers in debian/control; it still needs some details sorted out, but I'm confident we will be able to address these issues properly soon enough.

Why stop at the Debian ecosystem?

Why indeed. I'm clearly trying to use FOSDEM, and the CrossDistribution devroom, as the venue to discuss just that.

Posted Fri Jan 20 15:12:33 2012 Tags:

Deploying distromatch

I have been working on allowing anyone to set up their own distromatch instance.

For Debian and Ubuntu, I can easily generate the distromatch input using UDD and the Contents files found in any mirrors.

For the whole RPM world, thanks to Olivier Thauvin I have been able to set up regular exports from the vast Sophie database.

I have set up distromatch access on DDE, which can also serve as a list of all working distributions so far. If you have access to the full dataset of package names and package contents for a distribution not in that list, please get in touch and we can add it.

I'm also exporting the full raw dataset which enables anyone to set up the same distromatch environment on their own machines.

Here is how:

# Get distromatch
git clone git://gitorious.org/appstream/distromatch.git
cd distromatch

# Fetch distribution information (updated every 2 days)
wget http://dde.debian.net/exports/distromatch-all.tar.gz

# Unpack it
mkdir data
tar -C data -zxf distromatch-all.tar.gz

# Reindex it (use --verbose if you are curious)
./distromatch --datadir=data --reindex --verbose

# Run it
./distromatch --datadir=data debian gedit

What does this mean? For example it means that if another distribution has some data (categories, screenshots...) that your distribution doesn't have, you can use distromatch to translate package names, then go and get it!
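
As a minimal illustration, this hypothetical snippet drives the distromatch command line tool from Python and turns its output into a dictionary. It assumes the "srcdistro:srcpkg dstdistro:dstpkg" output format shown in the older distromatch post further down this page:

    # Hypothetical sketch: ask distromatch where a package lives in other
    # distributions, and use the answer to go fetch their data.
    import subprocess

    def translate(datadir, distro, pkg):
        """Return a {distro: package} mapping for the given package."""
        out = subprocess.check_output(
            ["./distromatch", "--datadir=" + datadir, distro, pkg])
        mapping = {}
        for line in out.decode("utf-8").splitlines():
            fields = line.split()
            for dst in fields[1:]:
                dst_distro, _, dst_pkg = dst.partition(":")
                mapping[dst_distro] = dst_pkg
        return mapping

    # Example: where is Debian's gedit in the other distributions?
    # print(translate("data", "debian", "gedit"))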

My next step is going to be to improve the distromatch functionality in DDE and possibly build a simple user-friendly web interface to it. If you have some jQuery experience and would like to help, don't hesitate to get in touch.

Posted Fri Feb 18 13:46:30 2011 Tags:

update-apt-xapian-index on other distros

I've drafted a little HOWTO on using apt-xapian-index on non-Debian distributions.

The procedure has been tried on Mageia with some success, and there's no reason it wouldn't work everywhere else: the index itself does not depend on anything distro-specific.
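
As a taste of what this makes possible, here is a small sketch of a keyword query against the index using the Xapian Python bindings. It assumes the index lives in the default location used on Debian and that each document's data is the package name; adjust both for your distribution:

    import xapian

    # Assumptions: default index location, package name stored as document data
    AXI_PATH = "/var/lib/apt-xapian-index/index"

    def search(keywords, limit=10):
        db = xapian.Database(AXI_PATH)
        enquire = xapian.Enquire(db)
        qp = xapian.QueryParser()
        qp.set_database(db)
        enquire.set_query(qp.parse_query(keywords))
        results = []
        for match in enquire.get_mset(0, limit):
            data = match.document.get_data()
            if isinstance(data, bytes):
                data = data.decode("utf-8", "replace")
            results.append(data)
        return results

    if __name__ == "__main__":
        for pkg in search("image editor"):
            print(pkg)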

Posted Tue Jan 25 23:01:45 2011 Tags:

Match package names across distributions

What would happen if we had a quick and reliable way to match package names across distributions?

These ideas came up at the appinstaller2011 meeting:

  • it would be easy to lookup screenshots in the local distro, and if there are none then fall back on other distributions;
  • it would be easy to port Debtags to other distributions, and possibly get changes back;
  • it would be trivial to add a [patches in $DISTRO] link to the PTS
  • it would be easy to point to other BTSes

We thought they were good ideas, so we started hacking.

To try it, you need to get the code and build the index first:

git clone git://git.debian.org/users/enrico/distromatch.git
cd distromatch
# Careful: 90Mb
wget http://people.debian.org/~enrico/dist-info.tar.gz
tar zxf dist-info.tar.gz
# Takes a long time to do the indexing
./distromatch --reindex --verbose

Then you can query it this way:

./distromatch $DISTRO $PKGNAME [$PKGNAME1 ...]

This would give you, for the package $PKGNAME in $DISTRO, the corresponding package names in all other distros for which we have data. If you do not provide package names, it automatically shows output for all packages in $DISTRO.

For example:

$ time ./distromatch debian libdigest-sha1-perl
debian:libdigest-sha1-perl fedora:perl-Digest-SHA1
debian:libdigest-sha1-perl mandriva:perl-Digest-SHA1
debian:libdigest-sha1-perl suse:perl-Digest-SHA1

real    0m0.073s
user    0m0.056s
sys 0m0.016s

Yes it's quick. It builds a Xapian index with the information it needs, and then it reuses it. As soon as I find a moment, I intend to deploy an instance of it on DDE.

It is using a range of different heuristics:

  • match packages by name;
  • match packages by desktop files contained within;
  • match packages by pkg-config metadata files contained within;
  • match packages by [/usr]/bin/* files contained within;
  • match packages by shared library files contained within;
  • match packages by devel library files contained within;
  • match packages by man pages contained within;
  • match stemmed form of development library package names;
  • match stemmed form of shared library package names;
  • match stemmed form of perl library package names;
  • match stemmed form of python library package names.

This list may get obsolete soon as more heuristics get implemented.

Heuristics will never cover all the corner cases we surely have, but the idea is that if we can match a sizable amount of packages, the rest can somehow be fixed by hand as needed.
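
To make one of those heuristics concrete, here is a toy sketch that matches packages across two distributions by the .desktop files they ship. It assumes the one-"package path"-per-line format of the files.gz dumps shown further down:

    # Toy version of the "match by desktop files" heuristic.
    import gzip
    from collections import defaultdict

    def desktop_index(files_gz):
        """Map .desktop basenames to the set of packages shipping them."""
        index = defaultdict(set)
        with gzip.open(files_gz, "rt", errors="replace") as fd:
            for line in fd:
                pkg, _, path = line.rstrip("\n").partition(" ")
                if path.startswith("usr/share/applications/") and path.endswith(".desktop"):
                    index[path.rsplit("/", 1)[1]].add(pkg)
        return index

    def match(files_a, files_b):
        a, b = desktop_index(files_a), desktop_index(files_b)
        for desktop in sorted(set(a) & set(b)):
            yield desktop, a[desktop], b[desktop]

    # for name, pkgs_a, pkgs_b in match("dist-debian/files.gz", "dist-fedora/files.gz"):
    #     print(name, pkgs_a, pkgs_b)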

The data it requires for a distribution should be rather straightforward to generate:

  1. a file which maps binary package names to source package names
  2. a file with the list of files in all the packages

For example:

$ ls -l dist-debian/
total 39688
-rw-r--r--  1 enrico enrico  1688249 Jan 20 17:37 binsrc
drwxr-xr-x  2 enrico enrico     4096 Jan 21 19:12 db
-rw-r--r--  1 enrico enrico 29960406 Jan 21 10:02 files.gz
-rw-r--r--  1 enrico enrico  8914771 Jan 21 18:39 interesting-files

$ head dist-debian/binsrc 
openoffice.org-dev openoffice.org
ext4-modules-2.6.32-5-4kc-malta-di linux-kernel-di-mipsel-2.6
linux-headers-2.6.30-2-common linux-2.6
libnspr4 nspr
ipfm ipfm
libforks-perl libforks-perl
med-physics debian-med
libntfs-3g-dev ntfs-3g
libguppi16 guppi
selinux selinux

$ zcat dist-debian/files.gz | head
memstat etc/memstat.conf
memstat usr/bin/memstat
memstat usr/share/doc/memstat/changelog.gz
memstat usr/share/doc/memstat/copyright
memstat usr/share/doc/memstat/memstat-tutorial.txt.gz
memstat usr/share/man/man1/memstat.1.gz
libdirectfb-dev usr/bin/directfb-config
libdirectfb-dev usr/bin/directfb-csource
libdirectfb-dev usr/include/directfb-internal/core/clipboard.h
libdirectfb-dev usr/include/directfb-internal/core/colorhash.h

interesting-files and db are generated when indexing.

To prove the usefulness of the idea (but does it need proving?), you can find in the same git repo a little example app (it took me 10 minutes to write) that uses the distromatch engine to export Debtags tags to other distributions:

$ ./exportdebtags fedora | head
memstat: admin::benchmarking, interface::commandline, role::program, use::monitor
libdirectfb-dev: devel::lang:c, devel::library, implemented-in::c, interface::framebuffer, role::devel-lib
libkonqsidebarplugin4a: implemented-in::c++, role::shared-lib, suite::kde, uitoolkit::qt
libemail-simple-perl: devel::lang:perl, devel::library, implemented-in::perl, role::devel-lib, role::shared-lib, works-with::mail
libpoe-component-pluggable-perl: devel::lang:perl, devel::library, implemented-in::perl, role::shared-lib
manpages-ja: culture::japanese, made-of::man, role::documentation
libhippocanvas-dev: devel::library, qa::low-popcon, role::devel-lib
libexpat-ocaml-dev: devel::lang:ocaml, devel::library, implemented-in::c, implemented-in::ocaml, role::devel-lib, works-with-format::xml
libgnutls-dev: devel::library, role::devel-lib, suite::gnu

Just in case this made you itch to play with Debtags in a non-Debian distribution, I've generated the full datasets for Fedora, Mandriva and OpenSUSE.

Others have been working on the same matching problem. After we started writing code, we became aware of existing work:

I'd like to make use of those efforts, maybe to cross-validate results, or maybe, even better, as yet another heuristic.

Update:

I built a simple distromatch query system into DDE!

Posted Sat Jan 22 01:40:50 2011 Tags:

Cross-distro Meeting on Application Installer

I have been to a Cross-distro Meeting on Application Installer which to the best of our knowledge is also the first one of its kind. Credit goes to Vincent Untz for organising it, to OpenSUSE for hosting it and to the various sponsors for getting us there.

It went surprisingly well. We got along, got stuff done, and worked to agree on as many formats, protocols and technologies as we possibly could.

The timing of it is very important, as most major distros would like to adopt some of the features that just became popular in the various new app markets and stores, such as screenshots, user comments and ratings. It looks like a lot of new code is about to be written, or a lot of existing code is about to gain quite a bit of popularity.

For my part, I presented the work on Debtags and apt-xapian-index.

With regards to Debtags, other distros seem to be missing a comprehensive classification system, and Debtags is, well, it.

With regards to apt-xapian-index, we just noticed that it's the perfect back-end for what everyone would like to do, and the index structure is rather distribution-agnostic. It's been road-tested with considerable success by at least software-center, so it attracted quite a bit of interest, and will likely attract some more.

Just to prove a point I put together a prototype webby markety appy thing in just a few hours of work.

The meeting was also the ideal place to create a joint effort to match package names across distributions, which means that a lot of things that were hard to share before, such as screenshots, tags and patches, are suddenly not hard to share anymore.

Posted Sat Jan 22 01:40:50 2011 Tags:

A prototype webby markety appy thing

What better way to introduce my work at an Application Installer meeting than to come with a prototype package browser modeled after shopping sites developed in just a few hours?

It's a little Flask webapp that just works on any Debian system, using the local apt-xapian-index as a backend. It has fast keyword search, faceted navigation and screenshots, and it runs on your system showing the packages that you have available.

Screenshot of packageshelf

To try it:

git clone git://git.debian.org/users/enrico/pkgshelf.git
cd pkgshelf
./web-server.py

Then visit http://localhost:5000

It hasn't had much interface polishing, as it's just a quick technology demo. However you can see that:

  • keyword search is fast (fast enough that it could be made to search as you type);
  • relevant tags appear on the left, grouped by facets;
  • the most relevant tags are highlighted;
  • the less relevant tags could be hidden behind a [more] expander;
  • you can choose several strategies to hide packages you may find irrelevant.

Things that need doing:

  • hiding uninteresting facets;
  • making it pretty.

It's essentially JavaScript and CSS work. Anyone want to play?

Posted Sat Jan 22 01:40:50 2011 Tags:

fuss-launcher: an application launcher built on apt-xapian-index

Long ago I blogged about using apt-xapian-index to write an application launcher.

Now I just added a couple of new apt-xapian-index plugins that look like they have been made just for that.

In fact, they have indeed been made just for that.

After my blog post in 2008, people from Truelite and the FUSS project took up the challenge and wrote a launcher applet around my example engine.

The prototype has been quite successful in FUSS, and as a consequence I've been asked (and paid) to bring in some improvements.

The result, which I have just uploaded to NEW, is a package called fuss-launcher:

* New upstream release
   - Use newer apt-xapian-index: removed need of local index
   - Dragging a file in the launcher shows the applications that can open it
   - Remembers the applications launched more frequently
   - Allow to set a list of favourite applications

To get it:

  • apt-get install fuss-launcher (after it passed NEW);
  • or git clone http://git.fuss.bz.it/git/launcher.git/ and apt-get install python-gtk2 python-xapian python-xdg apt-xapian-index app-install-data

It requires apt-xapian-index >= 0.35.

To try it:

  1. Make sure your index is up to date, especially if you just installed app-install-data: just run update-apt-xapian-index as root.
  2. Run fuss-launcher.
  3. Click on the new tray icon to open the launcher dialog.
  4. Type some keywords and see the list of matching applications come to life as you type.

It's worth mentioning again that all this work was sponsored by Truelite and the Fuss project, which rocks.

Some screenshots:

When you open the launcher, by default it shows the most frequently started applications and the favourite applications:

launcher just opened

When you type some keywords, you get results as you type, and context-sensitive completion:

keyword search

When you drag a file on the launcher you only see the applications that can open that file:

drag files to the launcher

Posted Mon May 17 10:41:09 2010 Tags:
Posted Sat Jun 6 00:57:39 2009
dcg

Debian Community Guidelines.

Debconf6 talks material now online

I've finally put online slides and notes for my debconf6 talks:

Many people had asked me for the notes of the "Advanced ways of wasting time" talk: they're finally online, translated and with the links pointing to English Wikipedia pages. Sorry it took me so long.

Posted Sat Jun 6 00:57:39 2009 Tags:

DCG mentioned on Linux.com

A friend of mine pointed me to the Debian Community Guidelines being mentioned in a linux.com article.

"[Mako] hopes that Garrett's resignation will give the Debian community an added impetus to adapt its own code of conduct, like the one proposed by Enrico Zini."

I'm very happy to see the DCG getting mentioned, although I don't think that it makes sense to 'adopt' such a document.

What I'd like is for it to be mentioned as a suggested read, and linked from here and there. So today it happened, and I'm happy :)

Posted Sat Jun 6 00:57:39 2009 Tags:

Reorganization of the DCG

I've recently received a substantial amount of feedback about the Debian Community Guidelines and went into some reorganization of it.

The previous general section still stands as the Main Guidelines: those are the substantial few things to always keep in mind.

What previously was the long list of checklists is now split in two: the Debian-specific Guidelines, which should be a shorter list of non-obvious suggestions for people who already have experience with online life, and the General Guidelines, with the fuller checklists of useful suggestions for everyone.

I still haven't gone through the selection and reorganization of the Debian-specific and General part, so at the moment they look fairly similar and most things overlap. But the good news is that I finally found a structure that I like, and that can allow more experienced people to make use of the guidelines without getting bored with simpler things like "google before asking a question".

This division also suggests a little workflow: new suggestions can be added to the Debian-specific part, and then later moved to the general part when they become obvious for everyone.

I'm happy. This layout seems to be good at getting me unstuck on how to think about the DCG. More will come of course, as I prepare my DCG talk for Debconf6.

Posted Sat Jun 6 00:57:39 2009 Tags:

Converging to a solution

Sustaining a discussion towards solving a problem is sometimes more important than solving the problem.

I can't decide if this is trivial or counterintuitive. Anyway it's been quite enlightening when it came out. I once took this note:

I found that with my projects, when someone posted a mail about a problem I would work maybe some days to find a solution, and just post the solution at the end.

However, I have now realised that it's more constructive to have the problem-solving process itself happen online. This way, instead of waiting in silence for a few days, people get quicker feedback and extra information, and they also have a chance to participate in solving the problem before I manage to.

For example, when I have to interrupt to go home or sleep, someone else can pick it up and do another step.

Plus, the entire problem-solving process remains documented, which will provide more written information for future readers.

This note was from a few months ago; however, I still fail to do it. Bad habits are sometimes hard to change. Please kick me about it :)

Posted Sat Jun 6 00:57:39 2009 Tags:
Posted Sat Jun 6 00:57:39 2009
pdo

Pages exported to http://planet.debian.org.

HTML5 Server-sent events

I have a Django view that runs a slow script server-side, and streams the script output to JavaScript. This is the bit of code that runs the script and turns the output into a stream of events:

def stream_output(proc):
    '''
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    '''
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                # This file descriptor is done: flush any partial line and drop
                # it from all three parallel lists so they stay aligned
                fds.pop(idx)
                if len(bufs[idx]) != 0:
                    yield types[idx], bufs[idx]
                bufs.pop(idx)
                types.pop(idx)
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

I used to just serialize its output and stream it to JavaScript, then monitor onreadystatechange on the XMLHttpRequest object browser-side, but then it started failing on Chrome, which won't trigger onreadystatechange until something like a kilobyte of data has been received.

I didn't want to stream a kilobyte of padding just to work around this, so it was time to try out Server-sent events. See also this.

This is the Django view that sends the events:

class HookRun(View):
    def get(self, request):
        proc = run_script(request)
        def make_events():
            for evtype, data in utils.stream_output(proc):
                if evtype == "result":
                    yield "event: {}\ndata: {}\n\n".format(evtype, data)
                else:
                    yield "event: {}\ndata: {}\n\n".format(evtype, data.decode("utf-8", "replace"))

        return http.StreamingHttpResponse(make_events(), content_type='text/event-stream')

    @method_decorator(never_cache)
    def dispatch(self, *args, **kwargs):
        return super().dispatch(*args, **kwargs)

And this is the template that renders it:

{% extends "base.html" %}
{% load i18n %}

{% block head_resources %}
{{block.super}}
<style type="text/css">
.out {
    font-family: monospace;
    padding: 0;
    margin: 0;
}
.stdout {}
.stderr { color: red; }
.result {}
.ok { color: green; }
.ko { color: red; }
</style>
{# Polyfill for IE, typical... https://github.com/remy/polyfills/blob/master/EventSource.js #}
<script src="{{ STATIC_URL }}js/EventSource.js"></script>
<script type="text/javascript">
$(function() {
    // Manage spinners and other ajax-related feedback
    $(document).nav();
    $(document).nav("ajax_start");

    var out = $("#output");

    var event_source = new EventSource("{% url 'session_hookrun' name=name %}");
    event_source.addEventListener("open", function(e) {
      //console.log("EventSource open:", arguments);
    });
    event_source.addEventListener("stdout", function(e) {
      out.append($("<p>").attr("class", "out stdout").text(e.data));
    });
    event_source.addEventListener("stderr", function(e) {
      out.append($("<p>").attr("class", "out stderr").text(e.data));
    });
    event_source.addEventListener("result", function(e) {
      if (+e.data == 0)
          out.append($("<p>").attr("class", "result ok").text("{% trans 'Success' %}"));
      else
          out.append($("<p>").attr("class", "result ko").text("{% trans 'Script failed with code' %} " + e.data));
      event_source.close();
      $(document).nav("ajax_end");
    });
    event_source.addEventListener("error", function(e) {
      // There is an annoyance here: e does not contain any kind of error
      // message.
      out.append($("<p>").attr("class", "result ko").text("{% trans 'Error receiving script output from the server' %}"));
      console.error("EventSource error:", arguments);
      event_source.close();
      $(document).nav("ajax_end");
    });
});
</script>
{% endblock %}

{% block content %}

<h1>{% trans "Processing..." %}</h1>

<div id="output">
</div>

{% endblock %}

It's simple enough, it seems reasonably well supported besides needing a polyfill for IE and, astonishingly, it even works!

Posted Sun Dec 14 16:32:10 2014 Tags:

SSL "protection"

In my experience with my VPS, setting up pretty much any service exposed to the internet, even something as simple as putting a calendar on my phone, requires an SSL certificate, which costs money, which needs to be given to some corporation or another.

When the only way to get protection from a threat is to give money to some big fish, I feel like I'm being forced to pay protection money.

I look forward to this.

Posted Thu Dec 11 15:35:25 2014 Tags:

Radicale and DAVDroid

radicale and DAVdroid appeal to me. Let's try to make the whole thing work.

A self-signed SSL certificate

Generating the certificate:

    openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
    [...]
    Country Name (2 letter code) [AU]:IT
    State or Province Name (full name) [Some-State]:Bologna
    Locality Name (eg, city) []:
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
    Organizational Unit Name (eg, section) []:
    Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
    Email Address []:postmaster@enricozini.org

Installing it on my phone:

    openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
    adb push cal-cert.crt /mnt/sdcard/
    enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate

Installing radicale in my VPS

An updated radicale package, with this patch to make it work with DAVDroid:

    apt-get source radicale
    # I reviewed 063f7de7a2c7c50de5fe3f8382358f9a1124fbb6
    git clone https://github.com/Kozea/Radicale.git
    # Move the python code from git to the Debian source
    dch -v 0.10~enrico  "Pulled in the not yet released 0.10 work from upstream"
    debuild -us -uc -rfakeroot

Install the package:

    # dpkg -i python-radicale_0.10~enrico0-1_all.deb
    # dpkg -i radicale_0.10~enrico0-1_all.deb

Create a system user to run it:

    # adduser --system --disabled-password radicale

Configure it for mod_wsgi with auth done by Apache:

    # For brevity, this is my config file with comments removed

    [storage]
    # Storage backend
    # Value: filesystem | multifilesystem | database | custom
    type = filesystem

    # Folder for storing local collections, created if not present
    filesystem_folder = /var/lib/radicale/collections

    [logging]
    config = /etc/radicale/logging

Create the wsgi file to run it:

    # mkdir /srv/radicale
    # cat <<EOT > /srv/radicale/radicale.wsgi
    import radicale
    radicale.log.start()
    application = radicale.Application()
    EOT
    # chown radicale.radicale /srv/radicale/radicale.wsgi
    # chmod 0755 /srv/radicale/radicale.wsgi

Make radicale commit to git

    # apt-get install python-dulwich
    # cd /var/lib/radicale/collections
    # git init
    # chown radicale.radicale -R /var/lib/radicale/collections/.git

Apache configuration

Add a new site to apache:

    $ cat /etc/apache2/sites-available/cal.conf
    # For brevity, this is my config file with comments removed
    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            ServerName cal.enricozini.org
            ServerAdmin enrico@enricozini.org

            Alias /robots.txt /srv/radicale/robots.txt
            Alias /favicon.ico /srv/radicale/favicon.ico

            WSGIDaemonProcess radicale user=radicale group=radicale threads=1 umask=0027 display-name=%{GROUP}
            WSGIProcessGroup radicale
            WSGIScriptAlias / /srv/radicale/radicale.wsgi

            <Directory /srv/radicale>
                    # WSGIProcessGroup radicale
                    # WSGIApplicationGroup radicale
                    # WSGIPassAuthorization On
                    AllowOverride None
                    Require all granted
            </Directory>

            <Location />
                    AuthType basic
                    AuthName "Enrico's Calendar"
                    AuthBasicProvider file
                    AuthUserFile /usr/local/etc/radicale/htpasswd
                    Require user enrico
            </Location>

            ErrorLog ${APACHE_LOG_DIR}/cal-enricozini-org-error.log
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/cal-enricozini-org-access.log combined

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/cal.pem
            SSLCertificateKeyFile /etc/ssl/private/cal.key
    </VirtualHost>
    </IfModule>

Then enable it:

    # a2ensite cal.conf
    # service apache2 reload

Create collections

DAVdroid seems to want to see existing collections on the server, so we create them:

    $ apt-get install cadaver
    $ cat <<EOT > /tmp/empty.ics
    BEGIN:VCALENDAR
    VERSION:2.0
    END:VCALENDAR
    EOT
    $ cat <<EOT > /tmp/empty.vcf
    BEGIN:VCARD
    VERSION:2.1
    END:VCARD
    EOT
    $ cadaver https://cal.enricozini.org
    WARNING: Untrusted server certificate presented for `cal.enricozini.org':
    [...]
    Do you wish to accept the certificate? (y/n) y
    Authentication required for Enrico's Calendar on server `cal.enricozini.org':
    Username: enrico
    Password: ****
    dav:/> cd enrico/contacts.vcf/
    dav:/> put /tmp/empty.vcf
    dav:/> cd ../calendar.ics/
    dav:/> put /tmp/empty.ics
    dav:/enrico/calendar.ics/> ^D
    Connection to `cal.enricozini.org' closed.

DAVdroid configuration

  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use https:////
  4. Add username and password

It should work.

Related links

Posted Tue Dec 9 16:35:50 2014 Tags:

The smell of email

This was written in response to a message with a list of demotivating behaviours in email interactions, like fingerpointing, aggressiveness, resistance when being called out for misbehaving, public humiliation for mistakes, and so on

There are times when I stumble on an instance of the set of things that were mentioned, and I think "ok, today I feel like doing some paid work rather than working on Debian".

If another day I wake up deciding to enjoy working on Debian, which I greatly do, I try and make sure that I can focus on bits of Debian where I don't stumble on any instances of the set of things that were mentioned.

Then I stumble on Gregor's GDAC and I feel like I'd happily lose one day of pay right now, and have fun with Debian.

I feel like Debian is this big open kitchen populated by a lot of people:

  • some dump shit
  • some poke the shit with a stick, contributing to the spread of the smell
  • some carefully clean up the shit, which in the short term still contributes to the smell, but makes things better in the long term
  • some prepare and cook, making a nice smell of food and NOMs
  • some try out the food and tell us how good it was

I have fun cooking and tring out the food. I have fun being around people who cook and try out the food.

The fun in the kitchen seems to be correlated to several things, one of which is that it seems to be inversely proportional to the stink.

I find this metaphore interesting, and I will start thinking about the smell of a mailing list post. I expect it should put posts into perspective, I expect I will develop an instinct for it, so that I won't give a stinky post the same importance of a post that smells of food.

I also expect that the more I learn to tell the smell of food from the smell of shit, the more I can help cleaning it, and the more I can help telling people who repeatedly contribute to the stink to please try cooking instead, or failing that, just try and stay out of the kitchen.

Posted Fri Dec 5 11:51:49 2014 Tags:

C++11 talk examples

On 2014-11-27 I gave a talk about C++ and new features introduced with C++11: these are the examples. They are all licensed under the WTFPL version 2. See cxx11-talk-notes for the talk notes.

Note that the wrapper interfaces turn errors from the underlying libraries into exceptions, so the method calls just do what they should, without the need to document special return values for errors, and without each library implementing yet another way of reporting errors.

Also note that all wrapper objects do RAII: you create them and they clean up after themselves when they go out of scope.

The wrapper objects also have cast operators to make them behave as the pointer or handle that they are wrapping, so that they can be transparently passed to the underlying libraries.

(note: I had to add U+2063 INVISIBLE SEPARATOR to prevent noreturn statements from being misinterpreted by the blog formatter. If you copy-paste the code and encounter issues, you may want to delete the noreturn statements and retype them)

A gcrypt hash class

This class is a light wrapper around gcrypt's hashing functions.

ezhash.h

#ifndef EZHASH_H
#define EZHASH_H

#include <string>
#include <gcrypt.h>

namespace ezhash {

class Hash
{
protected:
    // members can now be initialized just like this, without needing to repeat
    // their default assignment in every constructor
    gcry_md_hd_t handle = nullptr;

public:
    Hash(int algo, unsigned int flags=0);
    ~Hash();

    // Assign 'delete' to a method to tell the compiler not to generate it
    // automatically. In this case, we make the object non-copyable.
    Hash(const Hash&) = delete;
    Hash(const Hash&&) = delete;
    Hash& operator=(const Hash&) = delete;

    // Add a buffer to the hash
    void hash_buf(const std::string& buf);

    // Add the contents of a file to the hash
    void hash_file(int fd);

    // Get a string with the hexadecimal hash
    std::string read_hex(int algo=0);

    /// Pretend that we are a gcry_md_hd_t handle
    operator gcry_md_hd_t() { return handle; }
};

}

#endif

ezhash.cpp

#include "ezhash.h"
#include <unistd.h>
#include <errno.h>
#include <string>
#include <cstring>
#include <sstream>
#include <iomanip>
#include <stdexcept>

using namespace std;

namespace ezhash {

namespace {

// noreturn attribute, to tell the compiler that this function never returns
[[noreturn]] void throw_gcrypt_error(gcry_error_t err)
{
    string msg;
    msg += gcry_strsource(err);
    msg += "/";
    msg += gcry_strerror(err);
    throw runtime_error(msg);
}

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

}


Hash::Hash(int algo, unsigned int flags)
{
    gcry_error_t err = gcry_md_open(&handle, algo, flags);
    if (err) throw_gcrypt_error(err);
}

Hash::~Hash()
{
    gcry_md_close(handle);
}

void Hash::hash_buf(const std::string& buf)
{
    gcry_md_write(handle, buf.data(), buf.size());
}

void Hash::hash_file(int fd)
{
    char buf[4096];
    while (true)
    {
        ssize_t res = ::read(fd, buf, 4096);
        if (res < 0) throw_libc_error(errno);
        if (res == 0) break;
        gcry_md_write(handle, buf, res);
    }
}

std::string Hash::read_hex(int algo)
{
    unsigned char* res = gcry_md_read(handle, algo);

    unsigned int len = gcry_md_get_algo_dlen(
            algo == 0 ? gcry_md_get_algo(handle) : algo);

    // Format the hash into a hex digit
    stringstream hexbuf;
    hexbuf << hex << setfill('0');
    for (unsigned i = 0; i < len; ++i)
        hexbuf << setw(2) << (unsigned)res[i];

    return hexbuf.str();
}

}

Example usage

        ezhash::Hash sha256(GCRY_MD_SHA256);
        sha256.hash_buf("ciao\n");
        sha256.hash_buf("foo\n");
        cout << sha256.read_hex() << endl;

Simple sqlite bindings

Remarkably simple sqlite3 bindings based on lambda callbacks.

ezsqlite.h

#ifndef EZSQLITE_H
#define EZSQLITE_H

#include <sqlite3.h>
#include <string>
#include <functional>
#include <stdexcept>

namespace ezsqlite {

/// RAII wrapper around a sqlite3 database handle
class DB
{
protected:
    sqlite3* handle = nullptr;

public:
    // Open a connection to a SQLite database
    DB(const std::string& filename, int flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE);
    DB(const DB&) = delete;
    DB(const DB&&) = delete;
    DB& operator=(const DB&) = delete;
    ~DB();

    /**
     * Execute a query, optionally calling 'callback' on every result row
     *
     * The arguments to callback are:
     *  1. number of columns
     *  2. text values of the columns
     *  3. names of the columns
     */
    // std::function can be used to wrap any callable thing in C++
    // see: http://en.cppreference.com/w/cpp/utility/functional/function
    void exec(const std::string& query, std::function<bool(int, char**, char**)> callback=nullptr);

    /// Pretend that we are a sqlite3 pointer
    operator sqlite3*() { return handle; }
};

}

#endif

ezsqlite.cpp

#include "ezsqlite.h"

namespace ezsqlite {

DB::DB(const std::string& filename, int flags)
{
    int res = sqlite3_open_v2(filename.c_str(), &handle, flags, nullptr);
    if (res != SQLITE_OK)
    {
        // From http://www.sqlite.org/c3ref/open.html
        // Whether or not an error occurs when it is opened, resources
        // associated with the database connection handle should be
        // released by passing it to sqlite3_close() when it is no longer
        // required.
        std::string errmsg(sqlite3_errmsg(handle));
        sqlite3_close(handle);
        throw std::runtime_error(errmsg);
    }
}

DB::~DB()
{
    sqlite3_close(handle);
}

namespace {

// Adapter to have sqlite3_exec call a std::function
int exec_callback(void* data, int columns, char** values, char** names)
{
    std::function<bool(int, char**, char**)> cb = *static_cast<std::function<bool(int, char**, char**)>*>(data);
    return cb(columns, values, names);
}

}

void DB::exec(const std::string& query, std::function<bool(int, char**, char**)> callback)
{
    char* errmsg;
    void* cb = callback ? &callback : nullptr;
    int res = sqlite3_exec(handle, query.c_str(), exec_callback, cb, &errmsg);
    if (res != SQLITE_OK && errmsg)
    {
        // http://www.sqlite.org/c3ref/exec.html
        //
        // If the 5th parameter to sqlite3_exec() is not NULL then any error
        // message is written into memory obtained from sqlite3_malloc() and
        // passed back through the 5th parameter. To avoid memory leaks, the
        // application should invoke sqlite3_free() on error message strings
        // returned through the 5th parameter of of sqlite3_exec() after the
        // error message string is no longer needed. 
        std::string msg(errmsg);
        sqlite3_free(errmsg);
        throw std::runtime_error(msg);
    }
}

}

Example usage

    // Connect to the database
    ezsqlite::DB db("erlug.sqlite");

    // Make sure we have a table
    db.exec(R"(
        CREATE TABLE IF NOT EXISTS files (
                name TEXT NOT NULL,
                sha256sum TEXT NOT NULL
        )
    )");

    // Read the list of files that we know
    map<string, string> files;
    db.exec("SELECT name, sha256sum FROM files", [&](int columns, char** vals, char** names) {
        if (columns != 2) return false;
        files.insert(make_pair(vals[0], vals[1]));
        return true;
    });

A fast Directory object

This is a lightweight wrapper around O_PATH file descriptors for directories. I'd love to see a library of well-maintained, thin C++ bindings around libc, that do little more than turn errors into exceptions and also work with std::string buffers.

ezfs.h

#ifndef EZFS_H
#define EZFS_H

#include <string>
#include <functional>
#include <memory>
#include <cerrno>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <dirent.h>

namespace ezfs {

class Directory
{
protected:
    int handle = -1;

public:
    Directory(const std::string& pathname, int flags=0);
    ~Directory();
    Directory(const Directory&) = delete;
    Directory(const Directory&&) = delete;
    Directory& operator=(const Directory&) = delete;

    /// List the directory contents
    void ls(std::function<void(const dirent&)> callback);

    int open(const std::string& relpath, int flags, mode_t mode=0777);
};

std::string errno_str(int error=errno);
[[noreturn]] void throw_libc_error(int error=errno);

}

#endif

ezfs.cpp

#include "ezfs.h"
#include <stdexcept>
#include <memory>
#include <cstring>
#include <cstdlib>
#include <string>
#include <linux/limits.h>

using namespace std;

namespace ezfs {

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

Directory::Directory(const std::string& pathname, int flags)
{
    handle = ::open(pathname.c_str(), O_PATH | O_DIRECTORY | flags);
    if (handle < 0) throw_libc_error();
}

Directory::~Directory()
{
    ::close(handle);
}

void Directory::ls(std::function<void(const dirent&)> callback)
{
    int fd = openat(handle, ".", O_DIRECTORY);
    if (fd < 0) throw_libc_error();

    // RAII Self-cleaning DIR object
    unique_ptr<DIR, std::function<void(DIR*)>> dir(fdopendir(fd), [](DIR* dir) { if (dir) closedir(dir); });
    if (!dir)
    {
        // fdopendir(3): After a successful call to fdopendir(), fd is used
        // internally by the implementation, and should not otherwise be used
        // by the application.
        //
        // but if the fdopendir call was not successful, fd is not managed by
        // DIR, and we still need to close it, otherwise we leak a file
        // descriptor.
        //
        // However, close() may modify errno, so we take note of the errno set
        // by fdopendir and raise the exception based on that.
        int fdopendir_errno = errno;
        close(fd);
        throw_libc_error(fdopendir_errno);
    }

    // Size the dirent buffer properly
    const unsigned len = offsetof(dirent, d_name) + PATH_MAX + 1;
    unique_ptr<dirent, std::function<void(void*)>> dirbuf((dirent*)malloc(len), free);

    while (true)
    {
        dirent* res;
        int err = readdir_r(dir.get(), dirbuf.get(), &res);

        // End of directory contents
        if (err == 0)
        {
            if (res)
                callback(*res);
            else
                break;
        } else
            throw_libc_error(err);
    }
}

int Directory::open(const std::string& relpath, int flags, mode_t mode)
{
    int res = openat(handle, relpath.c_str(), flags, mode);
    if (res < 0) throw_libc_error();
    return res;
}

}

Example usage

        // This is almost the equivalent of running "sha256sum ."
        ezfs::Directory dir(".");
        dir.ls([&](const dirent& d) {
            if (d.d_type != DT_REG) return;

            ezhash::Hash sha256(GCRY_MD_SHA256);
            // I have no RAII wrapper around file handles at the moment, so
            // I'll have to use a try/catch for cleaning up after errors
            int fd = dir.open(d.d_name, O_RDONLY);
            try {
                sha256.hash_file(fd);
                close(fd);
            } catch (...) {
                close(fd);
                throw;
            }

            cout << sha256.read_hex() << "  " << d.d_name << endl;
        });
Posted Sun Nov 30 19:26:26 2014 Tags:

C++11 talk notes

On 2014-11-27 I gave a talk about C++ and new features introduced with C++11: these are the talk notes. See cxx11-talk-examples for the examples.

(note: I had to add U+2063 INVISIBLE SEPARATOR to prevent noreturn statements from being misinterpreted by the blog formatter. If you copy-paste the code and encounter issues, you may want to delete the noreturn statements and retype them)

Overview of programming languages

It has to be as fast as possible, so interpreted languages are out.

You don't want to micro manage memory, so C is out.

You don't want to require programmers to have a degree, so C++ is out.

You want fast startup and not depend on a big runtime, so Java is out.

[...]

(Bram Moolenaar)

C++ secret cultist protip

Do not call up what you cannot put down.

C++ is a compiled language

It is now possible to use the keyword constexpr to mark functions and objects that can be used at compile time:

/*
 * constexpr tells the compiler that a variable or function can be evaluated at
 * compile time.
 *
 * constexpr functions can also be run at run time, if they are called with
 * values not known at compile time.
 *
 * See http://en.cppreference.com/w/cpp/language/constexpr for more nice examples
 *
 * It can be used to avoid using constants in code, and using instead functions
 * for computing hardware bitfields or physical values, without losing in
 * efficiency.
 */

#include <iostream>

using namespace std;

constexpr int factorial(int n)
{
    return n <= 1 ? 1 : (n * factorial(n-1));
}

int main()
{
    cout << "Compile time factorial of 6: " << factorial(6) << endl;

    cout << "Enter a number: ";
    int a;
    cin >> a;

    cout << "Run time factorial of " << a << ": " << factorial(a) << endl;
}

See also this for more nice examples. See this and this for further discussion.

Multiline strings

        const char* code = R"--(
          printf("foo\tbar\n");
          return 0;
        )--";

See this.

C++ memory management protip

RAII: Resource Acquisition Is Instantiation

This is not new in C++11, but in my experience I have rarely seen it mentioned in C++ learning material, and it does make a huge difference in my code.

See this and this for details.

Constructors and member initializer lists

Initializers in curly braces now have their own type: std::initializer_list:

#include <string>
#include <iostream>
#include <unordered_set>

using namespace std;

// std::initializer_list<…>
//   will have as its value all the elements inside the curly braces

string join(initializer_list<string> strings)
{
    string res;
    for (auto str: strings)
    {
        if (!res.empty())
            res += ", ";
        res += str;
    }
    return res;
}

int main()
{
    unordered_set<string> blacklist{ ".", "..", ".git", ".gitignore" };

    cout << join({ "antani", "blinda" }) << endl;
}

See this for details, including the new uniform initialization trick of omitting parentheses in constructors so that you can call normal constructors and initializer_list constructors with the same syntax, which looks like an interesting thing when writing generic code in templates.

Type inference

I can now use auto instead of a type to let the compiler automatically compute the value of something I assign to:

        auto i = 3 + 2;

        // See also https://github.com/esseks/monicelli
        vector<string> names{ "antani", "blinda", "supercazzola" };
        for (auto i = names.cbegin(); i != names.cend(); ++i)
            cout << *i;

        template<typename T>
        T frobble(const T& stuff)
        {
             // This will work whatever type is returned by stuff.read()
             auto i = stuff.read();
             // …
        }

See this for more details.

Range-based for loop

C++ now has an equivalent of the various foreach constructs found in several interpreted languages!

        for (auto i: list_of_stuff)
                cout << i << endl;

        for (auto n: {0,1,2,3,4,5})
                cout << n << endl;

        // This construct:
        for (auto i: stuff)

        // If stuff is an array, it becomes:
        for (i = stuff; i < stuff + sizeof(stuff) / sizeof(stuff[0]); ++i)

        // If stuff has .begin() and .end() methods it becomes:
        for (i = stuff.begin(); i != stuff.end(); ++i)

        // Otherwise it becomes:
        for (i = begin(stuff); i != end(stuff); ++i)

        // And you can define begin() and end() functions for any type you
        // want, at any time

See this and this for details.

Lambda functions and expressions

Lambdas! Closures!

Something like this:

// JavaScript
var add = function(a, b) { return a + b; }
# Python
add = lambda a, b: a + b

Becomes this:

auto add = [](int a, int b) { return a + b; };

And something like this:

// JavaScript
var a = 0;
$.each([1, 2, 3, 4], function(idx, el) { a += el });

Becomes this:

unsigned a = 0;
auto nums = { 1, 2, 3, 4 };
std::for_each(nums.begin(), nums.end(), [&a](int el) { a += el; });

See this, this and this.

Tuple types

C++ now has a std::tuple type, that like in Python can be used to implement functions that return multiple values:

        tuple<int, string, vector<string>> parse_stuff()
        {
                return make_tuple(id, name, values);
        }

        string name; vector<string> values;

        // std::ignore can be used to throw away a result
        tie(ignore, name, values) = parse_stuff();

        // std::tie can also be used to do other kind of
        // multi-operations besides assignment:
        return tie(a, b, c) < tie(a1, b1, c1);
        // Is the same as:
        if (a != a1) return a < a1;
        if (b != b1) return b < b1;
        return c < c1;

See here, here and here.

Regular expressions

We now have regular expressions!

        std::regex re(R"((\w+)\s+(\w+))");
        string s("antani blinda");
        smatch res;

        if (regex_match(s, res, re))
            cout << "OK " << res[1] << " -- " << res[2] << endl;

The syntax is ECMAScript by default and can be optionally changed to basic, extended, awk, grep, or egrep.

See here and here.

General-purpose smart pointers

There is std::unique_ptr to code memory ownership explicitly, and std::shared_ptr as a reference counted pointer, and smart pointers can have custom destructors:

    unique_ptr<dirent, std::function<void(void*)>> dirbuf((dirent*)malloc(len), free);

See here and here.

Miscellaneous other cool things

Standard attribute specifiers

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

See here.

Hash tables

See here and look at the new containers unordered_set, unordered_map, unordered_multiset, and unordered_multimap.

Multithreading

There is a standard threading model, with quite a bit of library support: see here, here, here, and here for atomic data structures.

Variadic templates

Templates can now take variable number of arguments, and that opens possibilities for interesting code generation, like implementing a generic, type-safe printf statement, or something like this:

db.query(R"(
   INSERT INTO table NAMES (id, name, description)
     VALUES (?, ?, ?)
)", 4, "genio", "fantasia, intuizione, decisione, e velocità di esecuzione");

See here and here.

Essential tools

You need at least g++ 4.8 or clang 3.3 to have full C++11 support. They will both be available in jessie, and for wheezy you can use the nightly clang packages repository. I cannot think of a good excuse not to use -Wall on new code.

scan-build from clang is another nice resource for catching even more potential problems at compile time.

valgrind is a great tool for runtime code analysis: valgrind --tool=memcheck (the default) will check your program for wrong memory accesses and memory leaks. valgrind --tool=callgrind will trace function calls for profiling, to be analyzed with kcachegrind. valgrind --tool=helgrind can check multi-threaded programs for suspicious concurrent memory access patterns.

And of course gdb: a nice trick with C++ is to issue catch throw to get a breakpoint at the point where an exception is being thrown. help catch provides a list of other interesting catch examples.

Coredump tips: ulimit -c to enable core dumps, triggering a core dump with ^\, opening a core with gdb program core, and more details on man 5 core.

An extra gdb tip, which is not related to C++ but helped me considerably recently, is that it can be attached to running python programs to get a live Python traceback.

Posted Sun Nov 30 18:52:14 2014 Tags:

Calypso and DAVDroid

calypso and DAVdroid appeal to me. Let's try to make the whole thing work.

Update: radicale seems to also support git as a backend, and I plan to give it a try, too.

A self-signed SSL certificate

Generating the certificate:

$ openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
[...]
Country Name (2 letter code) [AU]:IT
State or Province Name (full name) [Some-State]:Bologna
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
Email Address []:postmaster@enricozini.org

Installing it on my phone:

$ openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
$ adb push cal-cert.crt /mnt/sdcard/
$ enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate

Installing calypso in my VPS

An updated calypso package:

$ git clone git://keithp.com/git/calypso
$ git checkout debian -b enrico
$ git remote add chrysn  git://prometheus.amsuess.com/calypso-patches
$ git fetch chrysn
$ git merge chrysn/chrysn/integration
$ dch -v 1.4+enrico  "Merged with chrysn integration branch"
$ debuild -us -uc -rfakeroot

Install the package:

# dpkg -i calypso_1.4+enrico_all.deb

Create a system user to run it:

# adduser --system --disabled-password calypso
# chsh calypso  # /bin/dash

Make it run at boot time (based on calypso-init from the git repo):

# cat /etc/default/calypso
CALYPSO_OPTS="-d -P $PIDFILE"
# diff -Nau calypso-init calypso-init.enrico
--- calypso-init        2014-11-26 11:50:35.301001194 +0100
+++ calypso-init.enrico 2014-11-26 12:18:16.564138554 +0100
@@ -62,8 +62,8 @@
        || return 1

    mkdir -p $(dirname $PIDFILE)
-       chown calypso:calypso $(dirname $PIDFILE)
-       start-stop-daemon --start -c $NAME --quiet --pidfile $PIDFILE --exec $DAEMON -- \
+       chown calypso:nogroup $(dirname $PIDFILE)
+       start-stop-daemon --start -c $NAME:nogroup --quiet --pidfile $PIDFILE --exec $DAEMON -- \
        $CALYPSO_OPTS \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
# cp calypso-init.enrico /etc/init.d/calypso
# update-rc.d calypso defaults

Setting up the database

# su - calypso

Certificates and server setup:

$ mkdir .config/calypso/certs
$ mv cal-key.pem .config/calypso/certs/cal.key
$ mv cal-cert.pem .config/calypso/certs/cal.pem
$ chmod 0600 .config/calypso/certs/*
$ cat > .config/calypso/config << EOF
[server]
certificate=/home/calypso/.config/calypso/certs/cal.pem
key=/home/calypso/.config/calypso/certs/cal.key

[acl]
type=htpasswd
encryption=sha1
filename=/home/calypso/.config/calypso/htpasswd
EOF

User passwords:

$ htpasswd -s .config/calypso/htpasswd enrico

Database initialization:

$ mkdir -p .config/calypso/calendars
$ cd .config/calypso/calendars
$ git init
$ cat > .calypso-collection << EOF
[collection]
is-calendar = True
is-addressbook = False
displayname = Test
description = Test calendar
EOF
$ git add .calypso-collection
$ git commit --allow-empty -m'initialize new calendar'

Start the server

# /etc/init.d/calypso start

DAVdroid configuration

  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use https://:5233
  4. Add username and password

It should work.

Related links

Posted Wed Nov 26 12:38:22 2014 Tags:

A mock webserver to use for unit testing HTTP clients

With python -m SimpleHTTPServer it's easy to bring up an HTTP server to use to test HTTP client code, however it only supports GET requests, and I needed to test an HTTP client that needs to perform a file upload.

It took way more than I originally expected to put this together, so here it is, hopefully saving other people (including future me) some time:

#!/usr/bin/python3

import http.server
import cgi
import socketserver
import hashlib
import json

PORT = 8081

class Handler(http.server.SimpleHTTPRequestHandler):
    def do_POST(self):
        info = {
            "method": "POST",
            "headers": { k: v for k, v in self.headers.items() },
        }

        # From https://snipt.net/raw/f8ef141069c3e7ac7e0134c6b58c25bf/?nice
        form = cgi.FieldStorage(
            fp=self.rfile,
            headers=self.headers,
            environ={'REQUEST_METHOD':'POST',
                     'CONTENT_TYPE':self.headers['Content-Type'],
                     })

        postdata = {}
        for k in form.keys():
            if form[k].file:
                buf = form.getvalue(k)
                postdata[k] = {
                    "type": "file",
                    "name": form[k].filename,
                    "size": len(buf),
                    # json.dumps will not serialize a byte() object, so we
                    # return the shasum instead of the file body
                    "sha256": hashlib.sha256(buf).hexdigest(),
                }
            else:
                vals = form.getlist(k)
                if len(vals) == 1:
                    postdata[k] = {
                        "type": "field",
                        "val": vals[0],
                    }
                else:
                    postdata[k] = {
                        "type": "multifield",
                        "vals": vals,
                    }

        info["postdata"] = postdata

        resbody = json.dumps(info, indent=1)
        print(resbody)

        resbody = resbody.encode("utf-8")

        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.send_header("Content-Length", str(len(resbody)))
        self.end_headers()

        self.wfile.write(resbody)

class TCPServer(socketserver.TCPServer):
    # Allow restarting the mock server without waiting for the socket to
    # leave TIME_WAIT: we only listen locally, and we may restart often in
    # some workflows
    allow_reuse_address = True

httpd = TCPServer(("", PORT), Handler)

print("serving at port", PORT)
httpd.serve_forever()
Posted Tue Nov 25 18:22:41 2014 Tags:

Fun and Sanity in Debian

A friend of mine recently asked: "is there anything happening in Debian besides systemd?"

Of course there is. He asked it 2 days after the freeze, which happened in time, and with an amazingly low RC bug count.

The most visible thing right now seems to be this endless init system argument, but there are fun and sane things in Debian. Many of them.

I think someone should put the spotlight on them, and here's my attempt.

Yesterday I set up a gobby document asking "What is now happening in Debian that is exciting, fun and sane?", and passed the link around the Cambridge Miniconf and some IRC channels.

Here are a few quotations that I collected:

The armhf and arm64 ports have for me been wonderful and exciting, and were a great time for me to start getting involved. (Jon "Aardvark" Ward)

We have a way of tracking random contributors, and as far as I know no other project has anything like it. (Enrico Zini)

codesearch.debian.net is an incredibly important resource, not just for us but for the free software community at large. (Ben Hutchings)

sources.debian.net is a very useful resource with lots of interested contributors, it received 10 OPW applicants (Stefano Zacchiroli)

It has never been easier to work on new infrastructure project thanks to the awesome work of the DSA team. We have dozens of contribution opportunities outside of just plain packaging. (Raphaël Hertzog)

The work on reproducible builds has achieved excellent results with 61.3% of packages being reproducible. (Paul Wise)

Porting arm64 has been (perversely) great fun. It's remarkably morish and I like nothing more than a tedious argument with autoconf macros. Working with lots of enthusiastic people from other teams, helping getting the port set up and build has been great - thank you everybody. (Wookey)

And here are random exciting things that were listed:

  • build-profile support (for bootstrapping) is all in jessie (dpkg, apt, sbuild, python-apt, debhelper, libconfig-model-dpkg-perl, lintian).
  • PointCloudLibrary (PCL) got migrated from Ubuntu to Debian
  • Long Term Support has arrived!
  • http://ci.debian.net
  • Debian is participating for the second time in OPW as mentor orga
  • ftp-master is getting an API
  • cross-toolchains for jessie are available
  • arm64/ppc64el ready to go into jessie
  • wheezy-backports is more useful and used than ever
  • we froze, in time, with a remarkably low RC bug count, and we have a concrete plan for getting from that to a release
Posted Sun Nov 9 16:10:48 2014 Tags:

cryptsetup password and parallel boot

Since parallel boot arrived, the cryptsetup password prompt on my system gets flooded with other boot messages during boot.

I fixed it, as suggested in #764555, installing plymouth, then editing /etc/default/grub to add splash to GRUB_CMDLINE_LINUX_DEFAULT:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

Besides showing pretty pictures (and most importantly, getting them out of my way if I press ESC), plymouth also provides a user prompt that works with parallel boot, which sounds like exactly what I needed.

Posted Fri Oct 24 10:20:22 2014 Tags:
Posted Sat Jun 6 00:57:39 2009

Bologna.

A criminal meal

We were queuing on the San Giovanni in Persiceto ring road, on our way to a surgical appointment at the USL in Crevalcore, when we noticed on the right this jaunty little restaurant with a few badges on the door from various guides, including even the Michelin one: http://www.ristorantegiardinetto.it/

Back home, we googled it and asked a few people from San Giovanni, and nobody knew it. A sane person would have said: "if the people of San Giovanni don't know it, there must be a reason". We instead said "if nobody knows it, let's go and have a look".

The truth is: "If the people of San Giovanni go to the Bertoldo instead, there must be a reason".

We walked in feeling optimistic. Typical Bolognese menu: tortellini, passatelli, ricotta tortelloni, boiled meats. Plus a "menu of the day" dedicated to Sicily, with Pasta alla Norma and things like that. Nice.

First courses ordered: Pasta alla Norma, Tortellini in broth.

As main courses: mixed roasted vegetables, pork chop. I would have liked the zampone from the boiled meat trolley, but since it's still too hot out they don't have the boiled meats, so I fell back on the chop.

After the waiter left with the order, a terrible doubt struck me: "if boiled meats are out of season, what do they make the broth with?".

The first courses arrive. After a thorough search involving both of us, we found a tiny sliver of aubergine in the middle of the spaghetti, which confirmed that these really were the spaghetti alla Norma we had ordered and not a mistaken order of spaghetti with tomato sauce plus extra grease.

The tortellini, on the other hand, were handmade, and small too (the pasta maker was good), and if you carefully drained off the broth they were even good.

When the first courses were done, the waiter asked "everything all right?" and I candidly said "the tortellini were good, pity about the broth". He didn't take it well. He barely suppressed a fit of rage and fled with the excuse of going to tell the cook. He came back shortly afterwards with some nonsense about the bottom of the pot, to which we didn't pay much attention. He had the good taste not to ask how the "spaghetti alla Norma" were.

The main courses arrive. Roasted vegetables scalding hot inside, the way only a microwave can heat them, and stuffed with breadcrumbs soaked in unpleasant grease. To make up for it, a pork chop so dry that just looking at it dried the tears from my eyes as well.

I tried sampling the chop in various spots, but in the end I had to leave it on the plate. I then helped finish the saturated-fat vegetables, which compared to the chop were di-vi-ne.

When the waiter came to clear the plates, somewhat resigned, he asked "was the chop not all right?". I answered "are you kidding?". He said "if you want I'll have another one made for you". I said "thanks, but better not". He insisted. I insisted that he bring the bill.

The bill took a while to arrive. In the end they charged us full price for everything, despite the complaints and the chop left on the plate. Later I'll go look up the feedback email address of the Michelin guide.

The bill:

  • 2 cover charge and bread: 5€ (the price of a pizza marinara)
  • 2 first courses: 19€ (8€ for spaghetti dressed with tomato and the grease wrung out of the baked aubergines; 11€ for good tortellini devastated by an unworthy broth)
  • 2 main courses: 20€ (10€ for vegetables au gratin of saturated fats and 10€ for yesterday's chop reheated with a hair dryer)
  • 1/2 house Sangiovese: 5€ (whatever)
  • 1l sparkling water: 2.50€
  • Total: 51.50€

Toilet with no toilet paper and a loose seat.

Had we been in the centre of Venice in August, you would write it off as a tourist trap. But out here in the Bassa, not knowing how to cook a pork chop is criminal.

I wanted to call 112: "hello, I'd like the police. Good evening. I'm in a restaurant in San Giovanni, Circ.ne Italia, 20. Yes. Please come right away. Yes. They are insulting the good name of the PIG. Yes, in the Bassa. San Giovanni in Persiceto, you heard right. Ok, good, I'll be waiting."

Posted Thu Sep 17 23:59:44 2009 Tags:

Hallucinations

Bologna, 14 December 2005.

This morning I was driving to Bologna (it rarely happens, but I hate it when it does).

At Primo Maggio I keep straight on towards via Colombo, and out of the corner of my eye I pick up a clearance-sale billboard on a warehouse on the right.

There is something odd about it. I look more carefully.

"No more phone calls! Total clearance sale due to change of management"

I smile: the warehouse was "Il Mobile di Castel Maggiore".

Posted Sat Jun 6 00:57:39 2009 Tags:

A high-class dinner

An interesting high-class dinner at the restaurant Il Sole in Trebbo di Reno:

[...] among the chefs of the new generation, the most appreciated ones, over the years they have transformed their cuisine, pairing the dishes of the Italian regional tradition with a lofty aspiration to research, to never-excessive experimentation, to a flair for novelty, making the balance between history and culinary innovation the ace up this inn's sleeve.

I haven't linked their site because it's an evil monolithic slab of Flash and, as punishment, it cannot be found on Google. You can find their e-mail address, which I wrote to asking about the New Year's Eve menu, but they never replied.

I went there with my parents several years ago and it left a good memory, so I went back with my girlfriend to see whether it's still worth it.

We ate (I'm going from memory and these are not exact descriptions):

First courses

Me: Amberjack cappellacci with a mushroom flavour and black truffle ice cream.

Her: Rice soup with sea bass tortelli, turnip tops and a hint of orange.

Main courses

One portion shared between us: Sea bass with roasted scampi.

Dessert

Stracchino millefeuille with smoked salt and persimmon sauce.

Wines

A Pinot from San Michele Appiano, unfortunately I don't remember the winery.

OLIVARES Dulce Monastrell, from Spain. We asked the cook which region of Spain, but he never came back to tell us. Google makes up for it: Murcia.

Comments

While waiting for the first course

A mini portion of raw amberjack with caramelised lemon, taggiasca olive paté and pappa al pomodoro. The various parts were good even though they didn't come together as a whole: a small taster, not astonishing, but interesting.

Amberjack cappellacci with a mushroom flavour and black truffle ice cream

The cappellacci pasta wasn't bad, although the filling lacked either salt or flavour, and you had to pay close attention to notice it at all.

The mushrooms were as bitter as poison.

The black truffle ice cream, in honour of the black truffle, had the truffle's smell but not its taste. The taste was a vague sweetishness.

The taste left in the mouth was the unpleasant bitterness of the mushrooms.

Rice soup with sea bass tortelli, turnip tops and a hint of orange

The soup had a vinyl-like consistency and no flavour at all. At the first spoonful both of us thought: "will they be offended if I ask for salt?".

The tortelli had good pasta, and the filling was perceptible, but the fishy smell was contrasted not so much by the orange (which went reasonably well with it) as by the turnip tops, which were unpleasantly bitter.

The taste left in the mouth was the unpleasant bitterness of the turnip tops.

Sea bass with roasted scampi

At the first bite the sea bass wasn't bad, but by the second it no longer had much to say: I would have added a bit of salt. The scampi was good, but it was served in a kind of fried black potato wool that tasted of french fries and covered its flavour almost entirely.

While waiting for the dessert

A scoop of strawberry ice cream with a strawberry on top.

The ice cream was made with good strawberries, which I appreciated, but ingredients aside its texture didn't impress us: in Bologna one gets used to very high ice cream standards.

In one of the two small plates we found a foreign body that we are still trying to convince ourselves was not a fingernail. I regret not calling the waiter over for an explanation; it's the traveller's habit of not making a fuss about this kind of thing.

Stracchino millefeuille with smoked salt and persimmon sauce

Definitely the highlight of the evening; it will be remembered for a long time.

The millefeuille pastry was a masterpiece: excellent from every point of view: truly a pleasure.

Breaking it with the spoon and feeling it under your teeth, the mouth expects, demands, to find inside it something equally grandiose, and is let down when it finds, yes indeed, stracchino that tastes of stracchino.

Disappointment. But it doesn't end there, because then comes the smoked salt, and at that point you enter a new world. I swear, I have never tasted anything so creatively revolting. I forced myself to finish it to find some sense in it, but nothing: if I think back to it I feel nauseous. It's the first time the best word I can find to describe a dessert is offensive.

The persimmon sauce decorating the plate wasn't bad, but it could only watch the tragedy helplessly.

My stomach is still complaining: not because of a hard digestion, but out of anger at seeing several parts, all of high quality, joined in the havoc of such a disgusting cacophony.

It's the first time my stomach has refused to digest because it feels mocked.

After the dessert

A tasting of more or less interesting little sweets, among which stood out a small bowl of cream that for both of us had a clear dominant note of chlorine, under which the egg flavour was actually not bad.

Service

  • The other tables got more bread tasters than we did: we only got some breadsticks, which seemed a bit stale to me and too greasy to her.
  • I'd like to see the label on the bottle while I'm given the wine to taste: instead, it was turned the other way.
  • I'm fine with someone having the menu in English but not speaking English, but at least give me the time to translate. I don't tolerate seeing an annoyed face when I ask the waiter for a pause so I can translate.
  • If I ask you which part of Spain the dessert wine comes from and you are the cook who recommended it to me, I expect you to know. If you tell me you don't know and will go and check, then also come back and tell me.

The other customers

One could say that maybe they didn't take us seriously because we came from an afternoon strolling around the centre and were wearing jeans: style-wise, that's something for us to improve.

However... however, we don't go and smoke in the toilet (which then stinks!), and after using the single-use mini cloth towel we put it in the basket for used single-use cloth towels, instead of folding it and putting it back, wet, on the pile of clean ones.

Price

Things you can do for the same price:

  • A dinner at Buriani or at the Dolce e Salato.
  • A white-truffle dinner at the trattoria La Rosa in Sant'Agostino.
  • A grand fish meal for two at the Giara in Altedo.
  • A white-truffle lunch at Cà Gabrielli at Corno alle Scale plus a day in the snow, including fuel for the trip, ski pass and ski rental.

This blog entry was written to make peace with my stomach, which before starting digestion demanded that a clear stand be taken.

Ergo, clear stand: tonight we ate badly.

Let's see if my stomach now lets me sleep.

Posted Sat Jun 6 00:57:39 2009 Tags:

Public hygiene

This morning:

  • Go to via Gramsci 12
  • Go to door 10 on the ground floor
  • Take the white ticket with the blue border
  • Wait (two hours)
  • Get the vaccination
  • Go to door 22 on the ground floor
  • Pay
  • Go back to door 10 on the ground floor
  • Hand in the receipt
  • Collect the certificate

I felt like the Logo turtle.

Posted Sat Jun 6 00:57:39 2009 Tags:

Notes on the LinuxDay talk

Advanced tools for goofing off

From the Linux Day 2005 in Bologna.

Debian GNU/Linux is a stable, secure, complete system that takes care of any kind of need. But what happens when every need is met? When every requirement is satisfied? When all our systems run reliably and we cannot even keep ourselves busy watching a defrag spin? We need creative, ingenious, totally useless ways to waste our time. Fortunately, in this talk we will see quite a few.

These are the notes I used for the talk. Fragmentary, but they should give the idea. On the Linux Day 2005 Bologna page you can find the audio recordings, and the video ones too once they are ready.

Introduction

Definition

I would define it as a way of spending time that is creative, but above all useless. Useless at least by the standards of society's current mass ritual, which dictates that the only useful things are the ones that require anxiety and effort.

Goofing off in history

The pyramids.

Goofing off in literature

Goofing off in art

  • The Dadaists (again)
  • Piero Manzoni
  • Marti Guixé

In science

  • The Ig Nobel prizes

Part 1: Command-line software

Basic tools

  • sl
  • an (and then mixing it up with man)
  • tama
  • vigor
  • an, wordplay
  • sysvbanner
  • dpkg -L bsdgames | grep /usr/games
    • bcd, ppt, morse
    • countmail
    • hangman
    • number
    • pig (man pig)
    • pom
    • quiz
    • random
    • wargames
  • robotfindskitten
  • fortune

Advanced tools

Commands:

dpkg -L filters
formail -I "" -s < .mail/debian-legal | dadadodo -
polygen bloccotraffico | lynx -dump -stdin
polygen uforobot | lynx -dump -stdin | grep -v '^$' | cowsay
for i in *.cow; do echo $i | cowsay -f `basename $i .cow`; done | less
polygen pythoniser | fmt | b1ff | cowsay -f eyes
polygen -X 50 unieuro | dadadodo - | festival --tts --language italian
polygen screensaver
randtype
bogosort

Part 2: Graphical software

Basic tools

  • cappuccino
  • Clocks
    • sunclock, daliclock, xarclock -update 1
    • xearth
    • xplanet with gadgets
  • xteddy
  • kodo
  • xdesktopwaves
  • xlaby
  • xlaby + kodo

Advanced tools

xscreensaver, xscreensaver-gl, rss-glx:

/usr/lib/xscreensaver/noseguy -program "polygen unieuro"
mkfifo pippo
tail -f pippo | festival --tts --language italian
/usr/lib/xscreensaver/noseguy -program "polygen unieuro|tee /home/enrico/pippo |fmt"
ll /usr/lib/xscreensaver
phosphor -program bash
phosphor -program 'xscreensaver-text | tee /dev/stderr | festival --tts'
(how do you speed it up?)
phosphor -program 'polygen -X 50 unieuro | dadadodo - | tee /dev/stderr | festival --tts --language italian'
matrixview

Part 3: Goofing off with serious tools

  • Almost serious
    • debtags search game::toys
    • xtartan -list
    • gdesklets gkrellm
  • Serious
    • character map + ctrl-shift
    • guppi
    • LDAP -- GEEZ! Multisync can do LDAP synchronization! I could add a new user in my mobile phone and have a UNIX account automatically created for it! :)
    • graphviz

Conclusion

  • xfs_fsr
Posted Sat Jun 6 00:57:39 2009 Tags:

Linux Day 2006

Cool!

Last year we ended up in the newspapers; this year we even made it onto TG1, and with an excellent report, too.

The talks in Bologna were nice, with a slant less towards programmers and more towards creative people: it was really nice to see how a musician and a photographer work with Linux.

Half a failure, instead, was the LIP (Linux Installation Party), which drew few participants. The most popular explanation is that by now you no longer need much help to install Linux, and except for those computers that require voodoo rites and holy cards of Sgala, modern distributions install themselves.

I like creative talks: in Venezuela I saw a talk given by a Blender professional from the Plumiferos project: spectacular!

Just as spectacular, yesterday, was seeing Daniele use Ardour and a sea of other synthesizers, effects and MIDI devices.

Mandatory goofing-off note: over dinner we created a polygen grammar to generate names of Italian IT companies. For example:

  • Caccygen
  • Pulycyd
  • Trombygym
  • Moltigen
  • Polysperm
  • Poltijam
  • Pulyben

When is the next evening talk?

Addition: article on Linux.com.

Posted Sat Jun 6 00:57:39 2009 Tags:
Posted Sat Jun 6 00:57:39 2009
eng

Pages in English.

HTML5 Server-sent events

I have a Django view that runs a slow script server-side, and streams the script output to Javascript. This is the bit of code that runs the script and turns the output into a stream of events:

def stream_output(proc):
    '''
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    '''
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                fds.pop(idx)
                # Always pop the pending buffer too, so the three lists stay
                # aligned for the remaining file descriptor
                pending = bufs.pop(idx)
                if len(pending) != 0:
                    yield types[idx], pending
                types.pop(idx)
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

I used to just serialize its output and stream it to JavaScript, then monitor onreadystatechange on the XMLHttpRequest object browser-side, but then it started failing on Chrome, which won't trigger onreadystatechange until something like a kilobyte of data has been received.

I didn't want to stream a kilobyte of padding just to work around this, so it was time to try out Server-sent events. See also this.

This is the Django view that sends the events:

class HookRun(View):
    def get(self, request):
        proc = run_script(request)
        def make_events():
            for evtype, data in utils.stream_output(proc):
                if evtype == "result":
                    yield "event: {}\ndata: {}\n\n".format(evtype, data)
                else:
                    yield "event: {}\ndata: {}\n\n".format(evtype, data.decode("utf-8", "replace"))

        return http.StreamingHttpResponse(make_events(), content_type='text/event-stream')

    @method_decorator(never_cache)
    def dispatch(self, *args, **kwargs):
        return super().dispatch(*args, **kwargs)

And this is the template that renders it:

{% extends "base.html" %}
{% load i18n %}

{% block head_resources %}
{{block.super}}
<style type="text/css">
.out {
    font-family: monospace;
    padding: 0;
    margin: 0;
}
.stdout {}
.stderr { color: red; }
.result {}
.ok { color: green; }
.ko { color: red; }
</style>
{# Polyfill for IE, typical... https://github.com/remy/polyfills/blob/master/EventSource.js #}
<script src="{{ STATIC_URL }}js/EventSource.js"></script>
<script type="text/javascript">
$(function() {
    // Manage spinners and other ajax-related feedback
    $(document).nav();
    $(document).nav("ajax_start");

    var out = $("#output");

    var event_source = new EventSource("{% url 'session_hookrun' name=name %}");
    event_source.addEventListener("open", function(e) {
      //console.log("EventSource open:", arguments);
    });
    event_source.addEventListener("stdout", function(e) {
      out.append($("<p>").attr("class", "out stdout").text(e.data));
    });
    event_source.addEventListener("stderr", function(e) {
      out.append($("<p>").attr("class", "out stderr").text(e.data));
    });
    event_source.addEventListener("result", function(e) {
      if (+e.data == 0)
          out.append($("<p>").attr("class", "result ok").text("{% trans 'Success' %}"));
      else
          out.append($("<p>").attr("class", "result ko").text("{% trans 'Script failed with code' %} " + e.data));
      event_source.close();
      $(document).nav("ajax_end");
    });
    event_source.addEventListener("error", function(e) {
      // There is an annoyance here: e does not contain any kind of error
      // message.
      out.append($("<p>").attr("class", "result ko").text("{% trans 'Error receiving script output from the server' %}"));
      console.error("EventSource error:", arguments);
      event_source.close();
      $(document).nav("ajax_end");
    });
});
</script>
{% endblock %}

{% block content %}

<h1>{% trans "Processing..." %}</h1>

<div id="output">
</div>

{% endblock %}

It's simple enough, it seems reasonably well supported (besides needing a polyfill for IE), and, astonishingly, it even works!

Posted Sun Dec 14 16:32:10 2014 Tags:

SSL "protection"

In my experience with my VPS, setting up pretty much any service exposed to the internet, even something as simple as putting a calendar on my phone, requires an SSL certificate, which costs money, which needs to be given to some corporation or another.

When the only way to get protection from a threat is to give money to some big fish, I feel like I'm being forced to pay protection money.

I look forward to this.

Posted Thu Dec 11 15:35:25 2014 Tags:

Radicale and DAVDroid

radicale and DAVdroid appeal to me. Let's try to make the whole thing work.

A self-signed SSL certificate

Generating the certificate:

    openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
    [...]
    Country Name (2 letter code) [AU]:IT
    State or Province Name (full name) [Some-State]:Bologna
    Locality Name (eg, city) []:
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
    Organizational Unit Name (eg, section) []:
    Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
    Email Address []:postmaster@enricozini.org

Installing it on my phone:

    openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
    adb push cal-cert.crt /mnt/sdcard/
    enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate

Installing radicale in my VPS

An updated radicale package, with this patch to make it work with DAVDroid:

    apt-get source radicale
    # I reviewed 063f7de7a2c7c50de5fe3f8382358f9a1124fbb6
    git clone https://github.com/Kozea/Radicale.git
    Move the python code from git to the Debian source
    dch -v 0.10~enrico  "Pulled in the not yet released 0.10 work from upstream"
    debuild -us -uc -rfakeroot

Install the package:

    # dpkg -i python-radicale_0.10~enrico0-1_all.deb
    # dpkg -i radicale_0.10~enrico0-1_all.deb

Create a system user to run it:

    # adduser --system --disabled-password radicale

Configure it for mod_wsgi with auth done by Apache:

    # For brevity, this is my config file with comments removed

    [storage]
    # Storage backend
    # Value: filesystem | multifilesystem | database | custom
    type = filesystem

    # Folder for storing local collections, created if not present
    filesystem_folder = /var/lib/radicale/collections

    [logging]
    config = /etc/radicale/logging

Create the wsgi file to run it:

    # mkdir /srv/radicale
    # cat <<EOT > /srv/radicale/radicale.wsgi
    import radicale
    radicale.log.start()
    application = radicale.Application()
    EOT
    # chown radicale.radicale /srv/radicale/radicale.wsgi
    # chmod 0755 /srv/radicale/radicale.wsgi

Make radicale commit to git

    # apt-get install python-dulwich
    # cd /var/lib/radicale/collections
    # git init
    # chown radicale.radicale -R /var/lib/radicale/collections/.git

Apache configuration

Add a new site to apache:

    $ cat /etc/apache2/sites-available/cal.conf
    # For brevity, this is my config file with comments removed
    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            ServerName cal.enricozini.org
            ServerAdmin enrico@enricozini.org

            Alias /robots.txt /srv/radicale/robots.txt
            Alias /favicon.ico /srv/radicale/favicon.ico

            WSGIDaemonProcess radicale user=radicale group=radicale threads=1 umask=0027 display-name=%{GROUP}
            WSGIProcessGroup radicale
            WSGIScriptAlias / /srv/radicale/radicale.wsgi

            <Directory /srv/radicale>
                    # WSGIProcessGroup radicale
                    # WSGIApplicationGroup radicale
                    # WSGIPassAuthorization On
                    AllowOverride None
                    Require all granted
            </Directory>

            <Location />
                    AuthType basic
                    AuthName "Enrico's Calendar"
                    AuthBasicProvider file
                    AuthUserFile /usr/local/etc/radicale/htpasswd
                    Require user enrico
            </Location>

            ErrorLog ${APACHE_LOG_DIR}/cal-enricozini-org-error.log
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/cal-enricozini-org-access.log combined

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/cal.pem
            SSLCertificateKeyFile /etc/ssl/private/cal.key
    </VirtualHost>
    </IfModule>

Then enable it:

    # a2ensite cal.conf
    # service apache2 reload

Create collections

DAVdroid seems to want to see existing collections on the server, so we create them:

    $ apt-get install cadaver
    $ cat <<EOT > /tmp/empty.ics
    BEGIN:VCALENDAR
    VERSION:2.0
    END:VCALENDAR
    EOT
    $ cat <<EOT > /tmp/empty.vcf
    BEGIN:VCARD
    VERSION:2.1
    END:VCARD
    EOT
    $ cadaver https://cal.enricozini.org
    WARNING: Untrusted server certificate presented for `cal.enricozini.org':
    [...]
    Do you wish to accept the certificate? (y/n) y
    Authentication required for Enrico's Calendar on server `cal.enricozini.org':
    Username: enrico
    Password: ****
    dav:/> cd enrico/contacts.vcf/
    dav:/> put /tmp/empty.vcf
    dav:/> cd ../calendar.ics/
    dav:/> put /tmp/empty.ics
    dav:/enrico/calendar.ics/> ^D
    Connection to `cal.enricozini.org' closed.

DAVdroid configuration

  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use https:////
  4. Add username and password

It should work.

Related links

Posted Tue Dec 9 16:35:50 2014 Tags:

The smell of email

This was written in response to a message with a list of demotivating behaviours in email interactions, like fingerpointing, aggressiveness, resistance when being called out for misbehaving, public humiliation for mistakes, and so on

There are times when I stumble on an instance of the set of things that were mentioned, and I think "ok, today I feel like doing some paid work rather than working on Debian".

If another day I wake up deciding to enjoy working on Debian, which I greatly do, I try and make sure that I can focus on bits of Debian where I don't stumble on any instances of the set of things that were mentioned.

Then I stumble on Gregor's GDAC and I feel like I'd happily lose one day of pay right now, and have fun with Debian.

I feel like Debian is this big open kitchen populated by a lot of people:

  • some dump shit
  • some poke the shit with a stick, contributing to the spread of the smell
  • some carefully clean up the shit, which in the short term still contributes to the smell, but makes things better in the long term
  • some prepare and cook, making a nice smell of food and NOMs
  • some try out the food and tell us how good it was

I have fun cooking and trying out the food. I have fun being around people who cook and try out the food.

The fun in the kitchen seems to be correlated to several things, one of which is that it seems to be inversely proportional to the stink.

I find this metaphor interesting, and I will start thinking about the smell of a mailing list post. I expect it will put posts into perspective, and I expect I will develop an instinct for it, so that I won't give a stinky post the same importance as a post that smells of food.

I also expect that the more I learn to tell the smell of food from the smell of shit, the more I can help cleaning it, and the more I can help telling people who repeatedly contribute to the stink to please try cooking instead, or failing that, just try and stay out of the kitchen.

Posted Fri Dec 5 11:51:49 2014 Tags:

C++11 talk examples

On 2014-11-27 I gave a talk about C++ and new features introduced with C++11: these are the examples. They are all licensed under the WTFPL version 2. See cxx11-talk-notes for the talk notes.

Note that the wrapper interfaces turn errors from the underlying libraries into exceptions, so the method calls just do what they should, without needing to document special return values for errors, and without each library having to implement yet another way of reporting them.

Also note that all wrapper objects do RAII: you create them and they clean up after themselves when they go out of scope.

The wrapper objects also have cast operators to make them behave as the pointer or handle that they are wrapping, so that they can be transparently passed to the underlying libraries.

(note: I had to add U+2063 INVISIBLE SEPARATOR to prevent noreturn statements to be misinterpreted by the blog formatter. If you copypaste the code and encounter issues, you may want to delete the noreturn statements and retype them)

A gcrypt hash class

This class is a light wrapper around gcrypt's hashing functions.

ezhash.h

#ifndef EZHASH_H
#define EZHASH_H

#include <string>
#include <gcrypt.h>

namespace ezhash {

class Hash
{
protected:
    // members can now be initialized just like this, without needing to repeat
    // their default assignment in every constructor
    gcry_md_hd_t handle = nullptr;

public:
    Hash(int algo, unsigned int flags=0);
    ~Hash();

    // Assign 'delete' to a method to tell the compiler not to generate it
    // automatically. In this case, we make the object non-copyable.
    Hash(const Hash&) = delete;
    Hash(const Hash&&) = delete;
    Hash& operator=(const Hash&) = delete;

    // Add a buffer to the hash
    void hash_buf(const std::string& buf);

    // Add the contents of a file to the hash
    void hash_file(int fd);

    // Get a string with the hexadecimal hash
    std::string read_hex(int algo=0);

    /// Pretend that we are a gcry_md_hd_t handle
    operator gcry_md_hd_t() { return handle; }
};

}

#endif

ezhash.cpp

#include "ezhash.h"
#include <unistd.h>
#include <errno.h>
#include <string>
#include <cstring>
#include <sstream>
#include <iomanip>
#include <stdexcept>

using namespace std;

namespace ezhash {

namespace {

// noreturn attribute, to tell the compiler that this function never returns
[[noreturn]] void throw_gcrypt_error(gcry_error_t err)
{
    string msg;
    msg += gcry_strsource(err);
    msg += "/";
    msg += gcry_strerror(err);
    throw runtime_error(msg);
}

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

}


Hash::Hash(int algo, unsigned int flags)
{
    gcry_error_t err = gcry_md_open(&handle, algo, flags);
    if (err) throw_gcrypt_error(err);
}

Hash::~Hash()
{
    gcry_md_close(handle);
}

void Hash::hash_buf(const std::string& buf)
{
    gcry_md_write(handle, buf.data(), buf.size());
}

void Hash::hash_file(int fd)
{
    char buf[4096];
    while (true)
    {
        ssize_t res = ::read(fd, buf, 4096);
        if (res < 0) throw_libc_error(errno);
        if (res == 0) break;
        gcry_md_write(handle, buf, res);
    }
}

std::string Hash::read_hex(int algo)
{
    unsigned char* res = gcry_md_read(handle, algo);

    unsigned int len = gcry_md_get_algo_dlen(
            algo == 0 ? gcry_md_get_algo(handle) : algo);

    // Format the hash into a hex digit
    stringstream hexbuf;
    hexbuf << hex << setfill('0');
    for (unsigned i = 0; i < len; ++i)
        hexbuf << setw(2) << (unsigned)res[i];

    return hexbuf.str();
}

}

Example usage

        ezhash::Hash sha256(GCRY_MD_SHA256);
        sha256.hash_buf("ciao\n");
        sha256.hash_buf("foo\n");
        cout << sha256.read_hex() << endl;

Simple sqlite bindings

Remarkably simple sqlite3 bindings based on lambda callbacks.

ezsqlite.h

#ifndef EZSQLITE_H
#define EZSQLITE_H

#include <sqlite3.h>
#include <string>
#include <functional>
#include <stdexcept>

namespace ezsqlite {

/// RAII wrapper around a sqlite3 database handle
class DB
{
protected:
    sqlite3* handle = nullptr;

public:
    // Open a connection to a SQLite database
    DB(const std::string& filename, int flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE);
    DB(const DB&) = delete;
    DB(const DB&&) = delete;
    DB& operator=(const DB&) = delete;
    ~DB();

    /**
     * Execute a query, optionally calling 'callback' on every result row
     *
     * The arguments to callback are:
     *  1. number of columns
     *  2. text values of the columns
     *  3. names of the columns
     */
    // std::function can be used to wrap any callable thing in C++
    // see: http://en.cppreference.com/w/cpp/utility/functional/function
    void exec(const std::string& query, std::function<bool(int, char**, char**)> callback=nullptr);

    /// Pretend that we are a sqlite3 pointer
    operator sqlite3*() { return handle; }
};

}

#endif

ezsqlite.cpp

#include "ezsqlite.h"

namespace ezsqlite {

DB::DB(const std::string& filename, int flags)
{
    int res = sqlite3_open_v2(filename.c_str(), &handle, flags, nullptr);
    if (res != SQLITE_OK)
    {
        // From http://www.sqlite.org/c3ref/open.html
        // Whether or not an error occurs when it is opened, resources
        // associated with the database connection handle should be
        // released by passing it to sqlite3_close() when it is no longer
        // required.
        std::string errmsg(sqlite3_errmsg(handle));
        sqlite3_close(handle);
        throw std::runtime_error(errmsg);
    }
}

DB::~DB()
{
    sqlite3_close(handle);
}

namespace {

// Adapter to have sqlite3_exec call a std::function
int exec_callback(void* data, int columns, char** values, char** names)
{
    std::function<bool(int, char**, char**)> cb = *static_cast<std::function<bool(int, char**, char**)>*>(data);
    return cb(columns, values, names);
}

}

void DB::exec(const std::string& query, std::function<bool(int, char**, char**)> callback)
{
    char* errmsg;
    void* cb = callback ? &callback : nullptr;
    int res = sqlite3_exec(handle, query.c_str(), exec_callback, cb, &errmsg);
    if (res != SQLITE_OK && errmsg)
    {
        // http://www.sqlite.org/c3ref/exec.html
        //
        // If the 5th parameter to sqlite3_exec() is not NULL then any error
        // message is written into memory obtained from sqlite3_malloc() and
        // passed back through the 5th parameter. To avoid memory leaks, the
        // application should invoke sqlite3_free() on error message strings
        // returned through the 5th parameter of of sqlite3_exec() after the
        // error message string is no longer needed. 
        std::string msg(errmsg);
        sqlite3_free(errmsg);
        throw std::runtime_error(msg);
    }
}

}

Example usage

    // Connect to the database
    ezsqlite::DB db("erlug.sqlite");

    // Make sure we have a table
    db.exec(R"(
        CREATE TABLE IF NOT EXISTS files (
                name TEXT NOT NULL,
                sha256sum TEXT NOT NULL
        )
    )");

    // Read the list of files that we know
    map<string, string> files;
    db.exec("SELECT name, sha256sum FROM files", [&](int columns, char** vals, char** names) {
        if (columns != 2) return false;
        files.insert(make_pair(vals[0], vals[1]));
        return true;
    });

A fast Directory object

This is a lightweight wrapper around O_PATH file descriptors for directories. I'd love to see a library of well-maintained and thin C++ bindings around libc, that do little more than turning errors into exceptions and making it also work with std::string buffers.

ezfs.h

#ifndef EZFS_H
#define EZFS_H

#include <string>
#include <functional>
#include <memory>
#include <cerrno>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <dirent.h>

namespace ezfs {

class Directory
{
protected:
    int handle = -1;

public:
    Directory(const std::string& pathname, int flags=0);
    ~Directory();
    Directory(const Directory&) = delete;
    Directory(const Directory&&) = delete;
    Directory& operator=(const Directory&) = delete;

    /// List the directory contents
    void ls(std::function<void(const dirent&)> callback);

    int open(const std::string& relpath, int flags, mode_t mode=0777);
};

std::string errno_str(int error=errno);
[[noreturn]] void throw_libc_error(int error=errno);

}

#endif

ezfs.cpp

#include "ezfs.h"
#include <stdexcept>
#include <memory>
#include <cstring>
#include <cstdlib>
#include <cstddef>
#include <string>
#include <linux/limits.h>

using namespace std;

namespace ezfs {

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

Directory::Directory(const std::string& pathname, int flags)
{
    handle = ::open(pathname.c_str(), O_PATH | O_DIRECTORY | flags);
    if (handle < 0) throw_libc_error();
}

Directory::~Directory()
{
    ::close(handle);
}

void Directory::ls(std::function<void(const dirent&)> callback)
{
    int fd = openat(handle, ".", O_DIRECTORY);
    if (fd < 0) throw_libc_error();

    // RAII Self-cleaning DIR object
    unique_ptr<DIR, std::function<void(DIR*)>> dir(fdopendir(fd), [](DIR* dir) { if (dir) closedir(dir); });
    if (!dir)
    {
        // fdopendir(3): After a successful call to fdopendir(), fd is used
        // internally by the implementation, and should not otherwise be used
        // by the application.
        //
        // but if the fdopendir call was not successful, fd is not managed by
        // DIR, and we still need to close it, otherwise we leak a file
        // descriptor.
        //
        // However, close() may modify errno, so we take note of the errno set
        // by fdopendir and raise the exception based on that.
        int fdopendir_errno = errno;
        close(fd);
        throw_libc_error(fdopendir_errno);
    }

    // Size the dirent buffer properly
    const unsigned len = offsetof(dirent, d_name) + PATH_MAX + 1;
    unique_ptr<dirent, std::function<void(void*)>> dirbuf((dirent*)malloc(len), free);

    while (true)
    {
        dirent* res;
        int err = readdir_r(dir.get(), dirbuf.get(), &res);

        // End of directory contents
        if (err == 0)
        {
            if (res)
                callback(*res);
            else
                break;
        } else
            throw_libc_error(err);
    }
}

int Directory::open(const std::string& relpath, int flags, mode_t mode)
{
    int res = openat(handle, relpath.c_str(), flags, mode);
    if (res < 0) throw_libc_error();
    return res;
}

}

Example usage

        // This is almost the equivalent of running "sha256sum ."
        ezfs::Directory dir(".");
        dir.ls([&](const dirent& d) {
            if (d.d_type != DT_REG) return;

            ezhash::Hash sha256(GCRY_MD_SHA256);
            // I have no RAII wrapper around file handles at the moment, so
            // I'll have to use a try/catch for cleaning up after errors
            int fd = dir.open(d.d_name, O_RDONLY);
            try {
                sha256.hash_file(fd);
                close(fd);
            } catch (...) {
                close(fd);
                throw;
            }

            cout << sha256.read_hex() << "  " << d.d_name << endl;
        });
Posted Sun Nov 30 19:26:26 2014 Tags:

C++11 talk notes

On 2014-11-27 I gave a talk about C++ and new features introduced with C++11: these are the talk notes. See cxx11-talk-examples for the examples.

(note: I had to add U+2063 INVISIBLE SEPARATOR to prevent noreturn statements to be misinterpreted by the blog formatter. If you copypaste the code and encounter issues, you may want to delete the noreturn statements and retype them)

Overview of programming languages

It has to be as fast as possible, so interpreted languages are out.

You don't want to micro manage memory, so C is out.

You don't want to require programmers to have a degree, so C++ is out.

You want fast startup and not depend on a big runtime, so Java is out.

[...]

(Bram Moolenaar)

C++ secret cultist protip

Do not call up what you cannot put down.

C++ is a compiled language

It is now possible to use the keyword constexpr to mark functions and objects that can be used at compile time:

/*
 * constexpr tells the compiler that a variable or function can be evaluated at
 * compile time.
 *
 * constexpr functions can also be run at run time, if they are called with
 * values not known at compile time.
 *
 * See http://en.cppreference.com/w/cpp/language/constexpr for more nice examples
 *
 * It can be used to avoid using constants in code, and using instead functions
 * for computing hardware bitfields or physical values, without losing in
 * efficiency.
 */

#include <iostream>

using namespace std;

constexpr int factorial(int n)
{
    return n <= 1 ? 1 : (n * factorial(n-1));
}

int main()
{
    cout << "Compile time factorial of 6: " << factorial(6) << endl;

    cout << "Enter a number: ";
    int a;
    cin >> a;

    cout << "Run time factorial of " << a << ": " << factorial(a) << endl;
}

See also this for more nice examples. See this and this for further discussion.

Multiline strings

        const char* code = R"--(
          printf("foo\tbar\n");
          return 0;
        )--";

See this.

C++ memory management protip

RAII: Resource Acquisition Is Instantiation

This is not new in C++11, but in my experience I have rarely seen it mentioned in C++ learning material, and it does make a huge difference in my code.

See this and this for details.
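
As a minimal sketch of the idea (mine, not one of the talk examples): a file descriptor wrapper whose destructor closes the descriptor, so that no exit path, not even an exception, can leak it:

// Minimal RAII sketch, not from the talk
#include <unistd.h>
#include <fcntl.h>
#include <stdexcept>
#include <string>
#include <iostream>

class File
{
    int fd = -1;

public:
    explicit File(const std::string& pathname)
        : fd(::open(pathname.c_str(), O_RDONLY))
    {
        if (fd < 0) throw std::runtime_error("cannot open " + pathname);
    }
    ~File() { ::close(fd); }

    File(const File&) = delete;
    File& operator=(const File&) = delete;

    /// Pretend that we are a file descriptor
    operator int() { return fd; }
};

int main()
{
    File f("/etc/hostname");      // any readable file will do
    char buf[64];
    ssize_t len = ::read(f, buf, sizeof(buf));
    std::cout << "read " << len << " bytes" << std::endl;
    // no close() needed: ~File runs when f goes out of scope, even on throw
}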

Constructors and member initializer lists

Initializers in curly braces now have their own type: std::initializer_list:

#include <string>
#include <iostream>
#include <unordered_set>

using namespace std;

// std::initializer_list<…>
//   will have as its value all the elements inside the curly braces

string join(initializer_list<string> strings)
{
    string res;
    for (auto str: strings)
    {
        if (!res.empty())
            res += ", ";
        res += str;
    }
    return res;
}

int main()
{
    unordered_set<string> blacklist{ ".", "..", ".git", ".gitignore" };

    cout << join({ "antani", "blinda" }) << endl;
}

See this for details, including the new uniform initialization trick of omitting parentheses in constructors so that you can call normal constructors and initializer_list constructors with the same syntax, which looks like an interesting thing when writing generic code in templates.

Type inference

I can now use auto instead of a type to let the compiler automatically compute the value of something I assign to:

        auto i = 3 + 2;

        // See also https://github.com/esseks/monicelli
        vector<string> names{ "antani", "blinda", "supercazzola" };
        for (auto i = names.cbegin(); i != names.cend(); ++i)
            cout << *i;

        template<typename T>
        T frobble(const T& stuff)
        {
             // This will work whatever type is returned by stuff.read()
             auto i = stuff.read();
             // …
        }

See this for more details.

Range-based for loop

C++ now has an equivalent of the various foreach constructs found in several interpreted languages!

        for (auto i: list_of_stuff)
                cout << i << endl;

        for (auto n: {0,1,2,3,4,5})
                cout << n << endl;

        // This construct:
        for (auto i: stuff)

        // If stuff is an array, it becomes:
        for (i = stuff; i < stuff + sizeof(stuff) / sizeof(stuff[0]); ++i)

        // If stuff has .begin() and .end() methods it becomes:
        for (i = stuff.begin(); i != stuff.end(); ++i)

        // Otherwise it becomes:
        for (i = begin(stuff); i != end(stuff); ++i)

        // And you can define begin() and end() functions for any type you
        // want, at any time

See this and this for details.

Lambda functions and expressions

Lambdas! Closures!

Something like this:

// JavaScript
var add = function(a, b) { return a + b; }
# Python
add = lambda a, b: a + b

Becomes this:

auto add = [](int a, int b) { return a + b; };

And something like this:

// JavaScript
var a = 0;
$.each([1, 2, 3, 4], function(idx, el) { a += el });

Becomes this:

unsigned a = 0;
auto nums = { 1, 2, 3, 4 };
std::for_each(nums.begin(), nums.end(), [&a](int el) { a += el; });

See this, this and this.

Tuple types

C++ now has a std::tuple type, that like in Python can be used to implement functions that return multiple values:

        tuple<int, string, vector<string>> parse_stuff()
        {
                return make_tuple(id, name, values);
        }

        string name; vector<string> values;

        // std::ignore can be used to throw away a result
        tie(ignore, name, values) = parse_stuff();

        // std::tie can also be used to do other kind of
        // multi-operations besides assignment:
        return tie(a, b, c) < tie(a1, b1, c1);
        // Is the same as:
        if (a != a1) return a < a1;
        if (b != b1) return b < b1;
        return c < c1;

See here, here and here.

Regular expressions

We now have regular expressions!

        std::regex re(R"((\w+)\s+(\w+))");
        string s("antani blinda");
        smatch res;

        if (regex_match(s, res, re))
            cout << "OK " << res[1] << " -- " << res[2] << endl;

The syntax is ECMAScript by default and can be optionally changed to basic, extended, awk, grep, or egrep.

See here and here.

General-purpose smart pointers

There is std::unique_ptr to code memory ownership explicitly, and std::shared_ptr as a reference counted pointer, and smart pointers can have custom destructors:

    unique_ptr<dirent, std::function<void(void*)>> dirbuf((dirent*)malloc(len), free);

See here and here.
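
A small sketch of the ownership semantics (mine, not one of the talk examples):

// Sketch, not from the talk: unique ownership vs reference counting
#include <memory>
#include <iostream>

struct Thing { int val = 42; };

int main()
{
    // unique_ptr: exactly one owner, ownership is transferred explicitly
    std::unique_ptr<Thing> a(new Thing);
    std::unique_ptr<Thing> b = std::move(a);     // a is now empty

    // shared_ptr: reference counted, freed when the last copy goes away
    std::shared_ptr<Thing> c = std::make_shared<Thing>();
    std::shared_ptr<Thing> d = c;
    std::cout << c.use_count() << std::endl;     // 2

    std::cout << b->val << std::endl;            // 42
}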

Miscellaneous other cool things

Standard attribute specifiers

string errno_str(int error)
{
    char buf[256];
#if (_POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600) && ! _GNU_SOURCE
    strerror_r(error, buf, 256);
    string res(buf);
#else
    string res(strerror_r(error, buf, 256));
#endif
    return res;
}

[[noreturn]] void throw_libc_error(int error)
{
    throw runtime_error(errno_str(error));
}

See here.

Hash tables

See here and look at the new containers unordered_set, unordered_map, unordered_multiset, and unordered_multimap.
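
A quick sketch of what using them looks like (mine, not one of the talk examples): counting words with an unordered_map, which gives average O(1) lookup at the cost of not keeping its keys sorted:

// Sketch, not from the talk: word counting with unordered_map
#include <unordered_map>
#include <string>
#include <iostream>

int main()
{
    std::unordered_map<std::string, unsigned> count;
    for (auto word: { "antani", "blinda", "antani" })
        ++count[word];

    for (const auto& kv: count)
        std::cout << kv.first << ": " << kv.second << std::endl;
}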

Multithreading

There is a standard threading model, with quite a bit of library support: see here, here, here, and here for atomic data structures.
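
For instance (a minimal sketch of mine, not one of the talk examples), a few threads bumping a shared std::atomic counter and being joined from main could look like this; build with -std=c++11 -pthread:

// Minimal sketch, not from the talk: threads updating a std::atomic counter
#include <atomic>
#include <thread>
#include <vector>
#include <iostream>

int main()
{
    std::atomic<unsigned> count(0);

    std::vector<std::thread> threads;
    for (unsigned i = 0; i < 4; ++i)
        threads.emplace_back([&count] {
            for (unsigned j = 0; j < 1000; ++j)
                ++count;                // atomic increment, no mutex needed
        });

    for (auto& t: threads)
        t.join();

    std::cout << count.load() << std::endl;   // always 4000
}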

Variadic templates

Templates can now take a variable number of arguments, which opens up possibilities for interesting code generation, like implementing a generic, type-safe printf, or something like this:

db.query(R"(
   INSERT INTO table (id, name, description)
     VALUES (?, ?, ?)
)", 4, "genio", "fantasia, intuizione, decisione, e velocità di esecuzione");

See here and here.
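
To show the mechanics with something self-contained (a sketch of mine, not the implementation of the hypothetical db.query above), here is a recursive variadic template that prints any number of arguments of any printable type:

// Sketch, not from the talk: recursion over a parameter pack
#include <iostream>

void print_all()
{
    std::cout << std::endl;       // base case: no arguments left
}

template<typename First, typename... Rest>
void print_all(const First& first, const Rest&... rest)
{
    std::cout << first << " ";
    print_all(rest...);           // recurse with one argument less
}

int main()
{
    print_all(4, "genio", "fantasia", 3.14);
}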

Essential tools

You need at least g++ 4.8 or clang 3.3 to have full C++11 support. They will both be available in jessie, and for wheezy you can use the nightly clang packages repository. I cannot think of a good excuse not to use -Wall on new code.

scan-build from clang is another nice resource for catching even more potential problems at compile time.

valgrind is a great tool for runtime code analysis: valgrind --tool=memcheck (the default) will check your program for wrong memory accesses and memory leaks. valgrind --tool=callgrind will trace function calls for profiling, to be analyzed with kcachegrind. valgrind --tool=helgrind can check multi-threaded programs for suspicious concurrent memory access patterns.

And of course gdb: a nice trick with C++ is to issue catch throw to get a breakpoint at the point where an exception is being thrown. help catch provides a list of other interesting catch examples.

Coredump tips: ulimit -c to enable core dumps, triggering a core dump with ^\, opening a core with gdb program core, and more details on man 5 core.

An extra gdb tip, which is not related to C++ but helped me considerably recently, is that it can be attached to running python programs to get a live Python traceback.

Posted Sun Nov 30 18:52:14 2014 Tags:

Calypso and DAVDroid

calypso and DAVdroid appeal to me. Let's try to make the whole thing work.

Update: radicale seems to also support git as a backend, and I plan to give it a try, too.

A self-signed SSL certificate

Generating the certificate:

$ openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
[...]
Country Name (2 letter code) [AU]:IT
State or Province Name (full name) [Some-State]:Bologna
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
Email Address []:postmaster@enricozini.org

Installing it on my phone:

$ openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
$ adb push cal-cert.crt /mnt/sdcard/
$ enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate

Installing calypso in my VPS

An updated calypso package:

$ git clone git://keithp.com/git/calypso
$ cd calypso
$ git checkout debian -b enrico
$ git remote add chrysn  git://prometheus.amsuess.com/calypso-patches
$ git fetch chrysn
$ git merge chrysn/chrysn/integration
$ dch -v 1.4+enrico  "Merged with chrysn integration branch"
$ debuild -us -uc -rfakeroot

Install the package:

# dpkg -i calypso_1.4+enrico_all.deb

Create a system user to run it:

# adduser --system --disabled-password calypso
# chsh calypso  # /bin/dash

Make it run at boot time (based on calypso-init from the git repo):

# cat /etc/default/calypso
CALYPSO_OPTS="-d -P $PIDFILE"
# diff -Nau calypso-init calypso-init.enrico
--- calypso-init        2014-11-26 11:50:35.301001194 +0100
+++ calypso-init.enrico 2014-11-26 12:18:16.564138554 +0100
@@ -62,8 +62,8 @@
        || return 1

    mkdir -p $(dirname $PIDFILE)
-       chown calypso:calypso $(dirname $PIDFILE)
-       start-stop-daemon --start -c $NAME --quiet --pidfile $PIDFILE --exec $DAEMON -- \
+       chown calypso:nogroup $(dirname $PIDFILE)
+       start-stop-daemon --start -c $NAME:nogroup --quiet --pidfile $PIDFILE --exec $DAEMON -- \
        $CALYPSO_OPTS \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
# cp calypso-init.enrico /etc/init.d/calypso
# update-rc.d calypso defaults

Setting up the database

# su - calypso

Certificates and server setup:

$ mkdir .config/calypso/certs
$ mv cal-key.pem .config/calypso/certs/cal.key
$ mv cal-cert.pem .config/calypso/certs/cal.pem
$ chmod 0600 .config/calypso/certs/*
$ cat > .config/calypso/config << EOF
[server]
certificate=/home/calypso/.config/calypso/certs/cal.pem
key=/home/calypso/.config/calypso/certs/cal.key

[acl]
type=htpasswd
encryption=sha1
filename=/home/calypso/.config/calypso/htpasswd
EOF

User passwords:

$ htpasswd -s .config/calypso/htpasswd enrico

Database initialization:

$ mkdir -p .config/calypso/calendars
$ cd .config/calypso/calendars
$ git init
$ cat > .calypso-collection << EOF
[collection]
is-calendar = True
is-addressbook = False
displayname = Test
description = Test calendar
EOF
$ git add .calypso-collection
$ git commit --allow-empty -m'initialize new calendar'

Start the server

# /etc/init.d/calypso start

DAVdroid configuration

  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use https://<servername>:5233
  4. Add username and password

It should work.

Posted Wed Nov 26 12:38:22 2014 Tags:

A mock webserver to use for unit testing HTTP clients

With python -m SimpleHTTPServer it's easy to bring up an HTTP server for testing HTTP client code; however, it only supports GET requests, and I needed to test an HTTP client that has to perform a file upload.

It took way more than I originally expected to put this together, so here it is, hopefully saving other people (including future me) some time:

#!/usr/bin/python3

import http.server
import cgi
import socketserver
import hashlib
import json

PORT = 8081

class Handler(http.server.SimpleHTTPRequestHandler):
    def do_POST(self):
        info = {
            "method": "POST",
            "headers": { k: v for k, v in self.headers.items() },
        }

        # From https://snipt.net/raw/f8ef141069c3e7ac7e0134c6b58c25bf/?nice
        form = cgi.FieldStorage(
            fp=self.rfile,
            headers=self.headers,
            environ={'REQUEST_METHOD':'POST',
                     'CONTENT_TYPE':self.headers['Content-Type'],
                     })

        postdata = {}
        for k in form.keys():
            if form[k].file:
                buf = form.getvalue(k)
                postdata[k] = {
                    "type": "file",
                    "name": form[k].filename,
                    "size": len(buf),
                    # json.dumps will not serialize a bytes object, so we
                    # return the shasum instead of the file body
                    "sha256": hashlib.sha256(buf).hexdigest(),
                }
            else:
                vals = form.getlist(k)
                if len(vals) == 1:
                    postdata[k] = {
                        "type": "field",
                        "val": vals[0],
                    }
                else:
                    postdata[k] = {
                        "type": "multifield",
                        "vals": vals,
                    }

        info["postdata"] = postdata

        resbody = json.dumps(info, indent=1)
        print(resbody)

        resbody = resbody.encode("utf-8")

        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.send_header("Content-Length", str(len(resbody)))
        self.end_headers()

        self.wfile.write(resbody)

class TCPServer(socketserver.TCPServer):
    # Allow restarting the mock server without needing to wait for the socket
    # to end TIME_WAIT: we only listen locally, and we may restart often in
    # some workflows
    allow_reuse_address = True

httpd = TCPServer(("", PORT), Handler)

print("serving at port", PORT)
httpd.serve_forever()
Posted Tue Nov 25 18:22:41 2014 Tags:

Fun and Sanity in Debian

A friend of mine recently asked: "is there anything happening in Debian besides systemd?"

Of course there is. He asked it 2 days after the freeze, which happened in time, and with an amazingly low RC bug count.

The most visible thing right now seems to be this endless init system argument, but there are fun and sane things in Debian. Many of them.

I think someone should put the spotlight on them, and here's my attempt.

Yesterday I set up a gobby document asking "What is now happening in Debian that is exciting, fun and sane?", and passed the link around the Cambridge Miniconf and some IRC channels.

Here are a few quotations that I collected:

The armhf and arm64 ports have for me been wonderful and exciting, and were a great time for me to start getting involved. (Jon "Aardvark" Ward)

We have a way of tracking random contributors, and as far as I know no other project has anything like it. (Enrico Zini)

codesearch.debian.net is an incredibly important resource, not just for us but for the free software community at large. (Ben Hutchings)

sources.debian.net is a very useful resource with lots of interested contributors, it received 10 OPW applicants (Stefano Zacchiroli)

It has never been easier to work on new infrastructure project thanks to the awesome work of the DSA team. We have dozens of contribution opportunities outside of just plain packaging. (Raphaël Hertzog)

The work on reproducible builds has achieved excellent results with 61.3% of packages being reproducible. (Paul Wise)

Porting arm64 has been (perversely) great fun. It's remarkably moreish and I like nothing more than a tedious argument with autoconf macros. Working with lots of enthusiastic people from other teams, helping getting the port set up and built has been great - thank you everybody. (Wookey)

And here are random exciting things that were listed:

  • build-profile support (for bootstrapping) is all in jessie (dpkg, apt, sbuild, python-apt, debhelper, libconfig-model-dpkg-perl, lintian).
  • PointCloudLibrary (PCL) got migrated from Ubuntu to Debian
  • Long Term Support has arrived!
  • http://ci.debian.net
  • Debian is participating for the second time in OPW as mentor orga
  • ftp-master is getting an API
  • cross-toolchains for jessie are available
  • arm64/ppc64el ready to go into jessie
  • wheezy-backports is more useful and used than ever
  • we froze, in time, with a remarkably low RC bug count, and we have a concrete plan for getting from that to a release
Posted Sun Nov 9 16:10:48 2014 Tags:

cryptsetup password and parallel boot

Since parallel boot was introduced, the cryptsetup password prompt on my system gets flooded with other boot messages during boot.

I fixed it, as suggested in #764555, by installing plymouth and then editing /etc/default/grub to add splash to GRUB_CMDLINE_LINUX_DEFAULT:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

Besides showing pretty pictures (and most importantly, getting them out of my way if I press ESC), plymouth also provides a user prompt that works with parallel boot, which sounds like exactly what I needed.

Posted Fri Oct 24 10:20:22 2014 Tags:
Posted Sat Jun 6 00:57:39 2009

Pages about OpenMoko.

Released nodm 0.7

I have released version 0.7 of nodm.

It only fixes one silly typo in autotools, which made it fail to build on Fedora.

Posted Sun May 23 21:36:52 2010 Tags:

Released nodm 0.6

I have released version 0.6 of nodm.

It is purely a bug fix release, trying harder to detect a console in order to get rid of a bug introduced in version 0.5.

Posted Mon Aug 3 12:34:16 2009 Tags:

Released nodm 0.5

I have released version 0.5 of nodm.

New features:

  • truncate ~/.xsession-errors on startup: finally that file stops growing, and growing, and growing...
  • dynamic VT allocation: it can now avoid opening a virtual terminal if it is already in use.
Posted Fri Jul 24 02:29:55 2009 Tags:

Getting dbus signatures right from Vala

I am trying to play a bit with Vala on the FreeRunner.

The freesmartphone.org stack on the OpenMoko is heavily based on DBus. Using DBus from Vala is rather simple, if mostly undocumented: you get a few examples in the Vala wiki and you make do with those.

All works fine with simple methods. But what about providing callbacks to signals that have complex nested structures in their signatures, like aa{sv}? You try, and if you don't get the method signature right, the signal is just silently not delivered, because it does not match the method signature.

So this is how to provide a callback to org.freesmartphone.Usage.ResourceChanged, with signature sba{sv}:

public void on_resourcechanged(dynamic DBus.Object pos,
                   string name,
                   bool state,
                   HashTable<string, Value?> attributes)
{
    stderr.printf("Resource %s changed\n", name);
}

And this is how to provide a callback to org.freesmartphone.GPS.UBX.DebugPacket, with signature siaa{sv}:

protected void on_ubxdebug_packet(dynamic DBus.Object ubx, string clid, int length,
        HashTable<string, Value?>[] wrongdata)
{
    stderr.printf("Received UBX debug packet");

    // Ugly ugly work-around
    PtrArray< HashTable<string, Value?> >* data = (PtrArray< HashTable<string, Value?> >)wrongdata;

    stderr.printf("%u elements received", data->len);
}

What is happening here is that the only method signature that I found matching the dbus signature is this one. However, the unmarshaller for some reason gets it wrong, and passes a PtrArray instead of a HashTable array. So you need to cast it back to what you've actually been passed.

Figuring all this out took several long hours and was definitely not fun.

Posted Wed Jul 15 12:30:50 2009 Tags:

Mapping using the Openmoko FreeRunner headset

The FreeRunner has a headset which includes a microphone and a button. When doing OpenStreetMap mapping, it would be very useful to be able to keep tangogps on the display and be able to mark waypoints using the headset button, and to record an audio track using the headset microphone.

In this way, I can use tangogps to see where I need to go, where it's already mapped and where it isn't, and then I can use the headset to mark waypoints corresponding to the audio track, so that later I can take advantage of JOSM's audio mapping features.

Enter audiomap:

$ audiomap --help
Usage: audiomap [options]

Create a GPX and audio track

Options:
  --version      show program's version number and exit
  -h, --help     show this help message and exit
  -v, --verbose  verbose mode
  -m, --monitor  only keep the GPS on and monitor satellite status
  -l, --levels   only show input levels

If called without parameters, or with -v which is suggested, it will:

  1. Fix the mixer settings so that it can record from the headset and detect headset button presses.
  2. Show a monitor of GPS satellite information until it gets a fix.
  3. Synchronize the system time with the GPS time so that the timestamps of the files that are created afterwards are accurate.
  4. Start recording a GPX track.
  5. Start recording audio.
  6. Record a GPX waypoint for every headset button press.

When you are done, you stop audiomap with ^C and it will properly close the .wav file, close the tags in the GPX waypoint and track files and restore the mixer settings.

You can unplug the headset and record using the handset microphone, but then you will not be able to set waypoints until you plug the headset back in.

After you stop audiomap, you will have a track, waypoints and .wav file ready to be loaded in JOSM.

Big thanks go to Luca Capello for finding out how to detect headset button presses.

Posted Sun Jun 7 23:51:37 2009 Tags:

Simple tool to query the GPS using the OpenMoko FSO stack

I was missing a simple command line tool that allows me to perform basic GPS queries in shellscripts.

Enter getgps:

# getgps --help
Usage: getgps [options]

Simple GPS query tool for the FSO stack

Options:
  --version          show program's version number and exit
  -h, --help         show this help message and exit
  -v, --verbose      verbose mode
  -q, --quiet        suppress normal output
  --fix              check if we have a fix
  -s, --sync-time    set system time from GPS time
  --info             get all GPS information
  --info-connection  get GPS connection information
  --info-fix         get GPS fix information
  --info-position    get GPS position information
  --info-accuracy    get GPS accuracy information
  --info-course      get GPS course information
  --info-time        get GPS time information
  --info-satellite   get GPS satellite information

So finally I can write little GPS-aware scripts:

if getgps --fix -q
then
    start_gps_aware_program
else
    start_gps_normal_program
fi

Or this.

Posted Sun Jun 7 17:59:32 2009 Tags:

Voice-controlled waypoints

I have it in my TODO list to implement taking waypoints when pressing the headset button of the openmoko, but that is not done yet.

In the meantime, I did some experiments with audio mapping, and since I did not manage to enter waypoints while recording them, I was looking for a way to make use of them anyway.

Enter findvoice:

$ ./findvoice  --help
Usage: findvoice [options] wavfile

Find the times in the wav file when there is clear voice among the noise

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -v, --verbose         verbose mode
  -p NUM, --percentile=NUM
            percentile to use to discriminate noise from voice
            (default: 90)
  -t, --timestamps      print timestamps instead of human readable information

You give it a wav file, and it will output a list of timestamps corresponding to where it thinks that you were talking clearly and near the FreeRunner / voice recorder, instead of leaving the recorder dangling to pick up background noise.

Its algorithm is crude and improvised, because I have no background whatsoever in audio processing, but it basically finds those parts of the audio file where the variance of the samples is above a given percentile: the higher the percentile, the fewer timestamps you get; the lower the percentile, the more likely it is to pick up a period of louder noise.

For example, you can automatically extract waypoints out of an audio file by using it together with gpxinterpolate:

./findvoice -t today.wav | ./gpxinterpolate today.gpx > today-waypoints.gpx

The timestamps it outputs are computed using the modification time of the .wav file: if your system clock was decently synchronised (which you can do with getgps), then the mtime of the wav is the time of the end of the recording, which gives the needed reference to compute timestamps that are absolute in time.

For example:

getgps --sync-time
arecord file.wav
^C
./findvoice -t file.wav | ./gpxinterpolate today.gpx > today-waypoints.gpx
Posted Sun Jun 7 02:48:40 2009 Tags:

Geocoding Unix timestamps

Geocoding EXIF tags in JPEG images is fun, but there is more that can benefit from interpolating timestamps over a GPX track.

Enter gpxinterpolate:

$ ./gpxinterpolate --help
Usage: gpxinterpolate [options] gpxfile [gpxfile...]

Read one or more GPX files and a list of timestamps on standard input. Output
a GPX file with waypoints at the location of the GPX track at the given
timestamps.

Options:
  --version      show program's version number and exit
  -h, --help     show this help message and exit
  -v, --verbose  verbose mode

For example, you can create waypoints interpolating file modification times:

find . -printf "%Ts %p\n" | ./gpxinterpolate ~/tracks/*.gpx > myfiles.gpx

In case you wonder where you were when you modified or accessed a file, now you can find out.

Posted Sun Jun 7 02:07:43 2009 Tags:

Recording audio on the FreeRunner

The FreeRunner can record audio. It is nice to record audio: for example I can run the recording in background while I keep tangogps in the screen, and take audio notes about where I am while I am doing mapping for OpenStreetMap.

Here is the script that I put together to create geocoded audio notes:

#!/bin/sh

WORKDIR=~/rec
TMPINFO=`mktemp $WORKDIR/info.XXXXXXXX`

# Sync system time and get GPS info
echo "Synchronising system time..."
getgps --sync-time --info > $TMPINFO

# Compute an accurate basename for the files we generate
BASENAME=~/rec/rec-$(date +%Y-%m-%d-%H-%M-%S)
# Then give a proper name to the file with saved info
mv $TMPINFO $BASENAME.info

# Proper mixer settings for recording
echo "Recording..."
alsactl -f /usr/share/openmoko/scenarios/voip-handset.state restore
arecord -D hw -f cd -r 8000 -t wav $BASENAME.wav

echo "Done"

It works like this:

  1. It synchronizes the system time from the GPS (if there is a fix) so that the timestamps on the wav files will be as accurate as possible.
  2. It also gets all sort of information from the GPS and stores them into a file, should you want to inspect it later.
  3. It records audio until it gets interrupted.

The file name of the files that it generates corresponds to the beginning of the recording. The mtime of the wav file obviously corresponds to the end of the recording. This can be used to later georeference the start and end point of the recording.

You can use this to check mixer levels and that you're actually getting any input:

arecord -D hw -f cd -r 8000 -t wav -V mono /dev/null

The getgps script is now described in its own post.

You may now want to experiment, in JOSM, with "Preferences / Audio settings / Modified times (time stamps) of audio files".

Posted Sun Jun 7 01:30:37 2009 Tags:

How to read the Freerunner's accelerometers

This code has been taken from moko_eightball by Jakob Westhoff: it just continuously prints the values of the three accelerometers.

#include <stdio.h>
#include <stdint.h>

void processInputEvents(FILE* in)
{
    int x = 0, y = 0, z = 0;
    while (1)
    {
        char padding[16];
        uint16_t type, code;
        int32_t value;

        // Skip the timestamp
        fread(padding, 1, 8, in);

        // Read the type
        fread(&type, 1, 2, in);

        // Read the code
        fread(&code, 1, 2, in);

        // Read the value
        fread(&value, 1, 4, in);

        switch( type )
        {
            case 0:
                switch( code )
                {
                    case 0:
                        fprintf(stdout, "x%d y%d z%d\n", x, y, z);
                        break;
                    default:
                        //warning( "Unknown code ( 0x%02x ) for type 0x%02x\n", code, type );
                        break;
                }
                break;
            case 2:
                switch ( code )
                {
                    case 0:
                        // Update to the new value
                        x = value;
                        break;
                    case 1:
                        // Update to the new value
                        y = value;
                        break;
                    case 2:
                        // Update to the new value
                        z = value;
                        break;
                    default:
                        //warning( "Unknown code ( 0x%02x ) for type 0x%02x\n", code, type );
                        break;
                }
                break;

            default:
                //warning( "Unknown type ( 0x%02x ) in accelerometer input stream\n", type );
                break;
        }


    }
}


int main()
{
    FILE* in = fopen("/dev/input/event2", "r");
    processInputEvents(in);
    fclose(in);
    return 0;
}
Posted Sat Jun 6 00:57:39 2009 Tags:
Posted Sat Jun 6 00:57:39 2009
ppy

Posts for Planet Python.

Custom function decorators with TurboGears 2

I am exposing some library functions using a TurboGears2 controller (see web-api-with-turbogears2). It turns out that some functions return a dict, some a list, some a string, and TurboGears 2 only allows JSON serialisation for dicts.

A simple work-around for this is to wrap the function result into a dict, something like this:

@expose("json")
@validate(validator_dispatcher, error_handler=api_validation_error)
def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
    # Call API
    res = self.engine.list_colours(filter, productID, maxResults)

    # Return result
    return dict(r=res)

It would be nice, however, to have an @webapi() decorator that automatically wraps the function result with the dict:

def webapi(func):
    def dict_wrap(*args, **kw):
        return dict(r=func(*args, **kw))
    return dict_wrap

# ...in the controller...

    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    @webapi
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)

        # Return result
        return res

This works, as long as @webapi appears last in the list of decorators. This is because if it appears last it will be the first to wrap the function, and so it will not interfere with the tg.decorators machinery.

Would it be possible to create a decorator that can be put anywhere among the decorator list? Yes, it is possible but tricky, and it gives me the feeling that it may break in any future version of TurboGears:

class webapi(object):
    def __call__(self, func):
        def dict_wrap(*args, **kw):
            return dict(r=func(*args, **kw))
        # Migrate the decoration attribute to our new function
        if hasattr(func, 'decoration'):
            dict_wrap.decoration = func.decoration
            dict_wrap.decoration.controller = dict_wrap
            delattr(func, 'decoration')
        return dict_wrap

# ...in the controller...

    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    @webapi
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)

        # Return result
        return res

As a convenience, TurboGears 2 offers, in the decorators module, a way to build decorator "hooks":

class before_validate(_hook_decorator):
    '''A list of callables to be run before validation is performed'''
    hook_name = 'before_validate'

class before_call(_hook_decorator):
    '''A list of callables to be run before the controller method is called'''
    hook_name = 'before_call'

class before_render(_hook_decorator):
    '''A list of callables to be run before the template is rendered'''
    hook_name = 'before_render'

class after_render(_hook_decorator):
    '''A list of callables to be run after the template is rendered.

    Will be run before it is returned up the WSGI stack'''

    hook_name = 'after_render'

The way these are invoked can be found in the _perform_call function in tg/controllers.py.

To show an example use of those hooks, let's add some polygen wisdom to every data structure we return:

class wisdom(decorators.before_render):
    def __init__(self, grammar):
        super(wisdom, self).__init__(self.add_wisdom)
        self.grammar = grammar
    def add_wisdom(self, remainder, params, output):
        from subprocess import Popen, PIPE
        output["wisdom"] = Popen(["polyrun", self.grammar], stdout=PIPE).communicate()[0]

# ...in the controller...

    @wisdom("genius")
    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
    
        # Return result
        return res

These hooks cannot however be used for what I need, that is, to wrap the result inside a dict. The reason is that they are called in this way:

        controller.decoration.run_hooks(
                'before_render', remainder, params, output)

and not in this way:

        output = controller.decoration.run_hooks(
                'before_render', remainder, params, output)

So it is possible to modify the output (if it is a mutable structure) but not to exchange it with something else.

Can we do even better? Sure we can. We can assimilate @expose and @validate inside @webapi to avoid repeating those same many decorator lines over and over again:

class webapi(object):
    def __init__(self, error_handler = None):
        self.error_handler = error_handler

    def __call__(self, func):
        def dict_wrap(*args, **kw):
            return dict(r=func(*args, **kw))
        res = expose("json")(dict_wrap)
        res = validate(validator_dispatcher, error_handler=self.error_handler)(res)
        return res

# ...in the controller...

    @expose("json")
    def api_validation_error(self, **kw):
        pylons.response.status = "400 Error"
        return dict(e="validation error on input fields", form_errors=pylons.c.form_errors)

    @webapi(error_handler=api_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)

        # Return result
        return res

This got rid of @expose and @validate, and provides almost all the default values that I need. Unfortunately I could not find out how to access api_validation_error from inside the decorator so that I could pass it to the validator, so I am stuck with the inconvenience of having to pass it explicitly every time.

Posted Wed Nov 4 17:52:38 2009 Tags:

Building a web-based API with Turbogears2

I am using TurboGears2 to export a python API over the web. Every API method is wrapped by a controller method that validates the parameters and returns the results encoded in JSON.

The basic idea is this:

@expose("json")
def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
    # Call API
    res = self.engine.list_colours(filter, productID, maxResults)

    # Return result
    return res

To validate the parameters we can use forms, it's their job after all:

class ListColoursForm(TableForm):
    fields = [
            # One field per parameter
            twf.TextField("filter", help_text="Please enter the string to use as a filter"),
            twf.TextField("productID", help_text="Please enter the product ID"),
            twf.TextField("maxResults", validator=twfv.Int(min=0), default=200, size=5, help_text="Please enter the maximum number of results"),
    ]
list_colours_form=ListColoursForm()

#...

    @expose("json")
    @validate(list_colours_form, error_handler=list_colours_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Parameter validation is done by the form
    
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
    
        # Return result
        return res

All straightforward so far. However, this means that we need two exposed methods for every API call: one for the API call and one error handler. For every API call, we have to type the name several times, which is error prone and risks getting things mixed up.

We can however have a single error handler for all methods:

def get_method():
    '''
    The method name is the first url component after the controller name that
    does not start with 'test'
    '''
    found_controller = False
    for name in pylons.c.url.split("/"):
        if not found_controller and name == "controllername":
            found_controller = True
            continue
        if name.startswith("test"):
            continue
        if found_controller:
            return name
    return None

class ValidatorDispatcher:
    '''
    Validate using the right form according to the value of the "method" field
    '''
    def validate(self, args, state):
        method = args.get("method", None)
        # Extract the method from the URL if it is missing
        if method is None:
            method = get_method()
            args["method"] = method
        return forms[method].validate(args, state)

validator_dispatcher = ValidatorDispatcher()

This validator will try to find the method name, either as a form field or by parsing the URL. It will then use the method name to find the form to use for validation, and pass control to the validate method of that form.

We then need to add an extra "method" field to our forms, and arrange the forms inside a dictionary:

class ListColoursForm(TableForm):
    fields = [
            # One hidden field to have a place for the method name
            twf.HiddenField("method")
            # One field per parameter
            twf.TextField("filter", help_text="Please enter the string to use as a filter"),
    #...

forms["list_colours"] = ListColoursForm()

And now our methods become much nicer to write:

    @expose("json")
    def api_validation_error(self, **kw):
        pylons.response.status = "400 Error"
        return dict(form_errors=pylons.c.form_errors)

    @expose("json")
    @validate(validator_dispatcher, error_handler=api_validation_error)
    def list_colours(self, filter=None, productID=None, maxResults=100, **kw):
        # Parameter validation is done by the form
    
        # Call API
        res = self.engine.list_colours(filter, productID, maxResults)
    
        # Return result
        return res

api_validation_error is interesting: it returns a proper HTTP error status, and a JSON body with the details of the error, taken straight from the form validators. It took me a while to find out that the form errors are in pylons.c.form_errors (and for reference, the form values are in pylons.c.form_values). pylons.response is a WebOb Response that we can play with.

So now our client side is able to call the API methods, and get a proper error if it calls them wrong.

But now that we have the forms ready, it doesn't take much to display them in web pages as well:

def _describe(self, method):
    "Return a dict describing an API method"
    ldesc = getattr(self.engine, method).__doc__.strip()
    sdesc = ldesc.split("\n")[0]
    return dict(name=method, sdesc = sdesc, ldesc = ldesc)

@expose("myappserver.templates.myappapi")
def index(self):
    '''
    Show an index of exported API methods
    '''
    methods = dict()
    for m in forms.keys():
        methods[m] = self._describe(m)
    return dict(methods=methods)

@expose('myappserver.templates.testform')
def testform(self, method, **kw):
    '''
    Show a form with the parameters of an API method
    '''
    kw["method"] = method
    return dict(method=method, action="/myapp/test/"+method, value=kw, info=self._describe(method), form=forms[method])

@expose(content_type="text/plain")
@validate(validator_dispatcher, error_handler=testform)
def test(self, method, **kw):
    '''
    Run an API method and show its prettyprinted result
    '''
    res = getattr(self, str(method))(**kw)
    return pprint.pformat(res)

In a few lines, we have all we need: an index of the API methods (including their documentation taken from the docstrings!), and for each method a form to invoke it and a page to see the results.

Make the forms children of AjaxForm, and you can even see the results together with the form.

Posted Thu Oct 15 15:45:39 2009 Tags:

Creating pipelines with subprocess

It is possible to create process pipelines using subprocess.Popen, by just using stdout=subprocess.PIPE and stdin=otherproc.stdout.

Almost.

In a pipeline created in this way, the stdout of all processes except the last is opened twice: once in the script that has run the subprocess, and once as the standard input of the next process in the pipeline.

This is a problem because if a process closes its stdin, the previous process in the pipeline does not get SIGPIPE when trying to write to its stdout, because that pipe is still open on the caller process. If this happens, a wait on that process will hang forever: the child process waits for the parent to read its stdout, the parent process waits for the child process to exit.

The trick is to close the stdout of each process in the pipeline except the last just after creating them:

#!/usr/bin/python
# coding=utf-8

import subprocess

def pipe(*args):
    '''
    Takes as parameters several dicts, each with the same
    parameters passed to popen.

    Runs the various processes in a pipeline, connecting
    the stdout of every process except the last with the
    stdin of the next process.
    '''
    if len(args) < 2:
        raise ValueError, "pipe needs at least 2 processes"
    # Set stdout=PIPE in every subprocess except the last
    for i in args[:-1]:
        i["stdout"] = subprocess.PIPE

    # Runs all subprocesses connecting stdins and stdouts to create the
    # pipeline. Closes stdouts to avoid deadlocks.
    popens = [subprocess.Popen(**args[0])]
    for i in range(1,len(args)):
        args[i]["stdin"] = popens[i-1].stdout
        popens.append(subprocess.Popen(**args[i]))
        popens[i-1].stdout.close()

    # Returns the array of subprocesses just created
    return popens

At this point, it's nice to write a function that waits for the whole pipeline to terminate and returns an array of result codes:

def pipe_wait(popens):
    '''
    Given an array of Popen objects returned by the
    pipe method, wait for all processes to terminate
    and return the array with their return values.
    '''
    results = [0] * len(popens)
    while popens:
        last = popens.pop(-1)
        results[len(popens)] = last.wait()
    return results

And, look and behold, we can now easily run a pipeline and get the return codes of every single process in it:

process1 = dict(args='sleep 1; grep line2 testfile', shell=True)
process2 = dict(args='awk \'{print $3}\'', shell=True)
process3 = dict(args='true', shell=True)
popens = pipe(process1, process2, process3)
result = pipe_wait(popens)
print result

Update: Colin Watson suggests an improvement to compensate for Python's nonstandard SIGPIPE handling.

Colin Watson has a similar library for C.

Posted Wed Jul 1 09:08:06 2009 Tags:

Turbogears i18n quirks

Collecting strings from .kid files

tg-admin i18n collect won't collect strings from your .kid files: you need the toolbox web interface for that.

Indentation problems in .kid files

The toolbox web interface chokes on indentation errors in your .kid files.

To see the name of the .kid file that causes the error, look at the tg-admin toolbox output in the terminal for lines like Working on app/Foo/templates/bar.kid.

What happens is that the .kid files are converted to python files, and if there are indentation glitches they end up in the python files, and python will complain.

Once you see from the tg-admin toolbox standard error which .kid file has the problem, edit it and try to make sure that all closing tags are at exactly the same indentation level as their corresponding opening tags. Even a single space matters.

Bad i18n bug in TurboKid versions earlier than 1.0.1

faide on #turbogears also says:

It is of the utmost importance that you use TurboKid 1.0.1, because it is the first version that corrects a BIG bug regarding i18n filters ...

The version below had a bug where the filters kept being added at each page load in such a way that after a few hundreds of pages you could have page loading times as long as 5 minutes!

If one has a previous version of TurboKid, one (and only one) of these is needed:

So, in short, all i18n users should upgrade to TurboGears 1.0.2.2 or patch TurboKid using http://trac.turbogears.org/ticket/1301.

Posted Sat Jun 6 00:57:39 2009 Tags:

TurboGears RemoteForm tip

In case your RemoteForm mysteriously behaves like a normal HTTP form, refreshing the page on submit, and the only hint that there's something wrong is this bit in Iceweasel's error console:

Error: uncaught exception: [Exception... "Component returned failure
code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIXMLHttpRequest.open]"
nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)"  location: "JS frame ::
javascript: eval(__firebugTemp__); :: anonymous :: line 1"  data: no]

the problem may just be a missing action= attribute on the form.

I found out after:

  1. reading the TurboGears remoteform wiki: "For some reason, the RemoteForm is acting like a regular html form, serving up a new page instead of performing the replacements we're looking for. I'll update this page as soon as I figure out why this is happening."

  2. finding this page on Google and meditating for a while while staring at it. I don't speak German, but often enough I manage to solve problems after meditating over Google results in all sorts of languages unknown or unreadable to me. I will call this practice Webomancy.

Posted Sat Jun 6 00:57:39 2009 Tags:

Linking to self in turbogears

I want to put in my master.kid some icons that allow changing the current language for the session.

First, all user-accessible methods need to handle a 'language' parameter:

@expose(template="myapp.templates.foobar")
def index(self, someparam, **kw):
    if 'language' in kw: turbogears.i18n.set_session_locale(kw['language'])

Then, we need a way to edit the current URL so that we can generate modified links to self that preserve the existing path_info and query parameters. In your main controller, add:

def linkself(**kw):
    params = {}
    params.update(cherrypy.request.params)
    params.update(kw)
    url = cherrypy.request.browser_url.split('?', 1)[0]
    return url + '?' + '&'.join(['='.join(x) for x in params.iteritems()])

def add_custom_stdvars(vars):
    return vars.update({"linkself": linkself})

turbogears.view.variable_providers.append(add_custom_stdvars)

(see the turbogears stdvars documentation and the cherrypy request documentation (cherrypy 2 documentation at the bottom of the page))

And finally, in master.kid:

<div id="footer">
  <div id="langselector">
    <span class="language">
      <a href="${tg.linkself(language='it_IT')}">
        <img src="${tg.url('/static/images/it.png')}"/>
      </a>
    </span>

    <span class="language">
      <a href="${tg.linkself(language='C')}">
        <img src="${tg.url('/static/images/en.png')}"/>
      </a>
    </span>
  </div><!-- langselector -->
</div><!-- footer -->
Posted Sat Jun 6 00:57:39 2009 Tags:

Python scoping

How do you create a list of similar functions in Python?

As a simple example, let's say we want to create an array of 10 elements like this:

a[0] = lambda x: x
a[1] = lambda x: x+1
a[2] = lambda x: x+2
...
a[9] = lambda x: x+9

Simple:

>>> a = []
>>> for i in range(0,10): a.append(lambda x: x+i)
...

...but wrong:

>>> a[0](1)
10

What happened here? In Python, that lambda x: x+i uses the value that i will have when the function is invoked.

This is the trick to get it right:

>>> a = []
>>> for i in range(0,10): a.append(lambda x, i=i: x + i)
...
>>> a[0](1)
1

What happens here is explained in the section "A Jedi Mind Trick" of the Instant Python article: i=i assigns as the default value of the parameter i the current value of i.

Strangely enough the same article has "A Note About Python 2.1 and Nested Scopes" which seems to imply that from Python 2.2 the scoping has changed to "work as it should". I don't understand: the examples above are run on Python 2.4.4.

Googling for keywords related to python closure scoping only yields various sorts of complicated PEPs and an even uglier list trick:

a lot of people might not know about the trick of using a list to box variables within a closure.

Now I know about the trick, but I wish I didn't need to know :-(

Posted Sat Jun 6 00:57:39 2009 Tags:

File downloads with TurboGears

In TurboGears, I had to implement a file download method, but the file required access controls so it was put in a directory not exported by Apache.

In #turbogears I've been pointed at: http://cherrypy.org/wiki/FileDownload and this is everything put together:

from cherrypy.lib.cptools import serveFile
# In cherrypy 3 it should be:
#from cherrypy.lib.static import serve_file

@expose()
def get(self, *args, **kw):
    """Access the file pointed by the given path"""
    pathname = check_auth_and_compute_pathname()
    return serveFile(pathname)

Then I needed to export some CSV:

@expose()
def getcsv(self, *args, **kw):
    """Get the data in CSV format"""
    rows = compute_data_rows()
    headers = compute_headers(rows)
    filename = compute_file_name()

    cherrypy.response.headers['Content-Type'] = "application/x-download"
    cherrypy.response.headers['Content-Disposition'] = 'attachment; filename="'+filename+'"'

    csvdata = StringIO.StringIO()
    writer = csv.writer(csvdata)
    writer.writerow(headers)
    writer.writerows(rows)

    return csvdata.getvalue()

In my case it's not an issue as I can only compute the headers after I computed all the data, but I still have to find out how to serve the CSV file while I'm generating it, instead of storing it all into a big string and returning the big string.

Posted Sat Jun 6 00:57:39 2009 Tags:

Turbogears form quirk

I had a great idea:

@validate(model_form)
@error_handler()
@expose(template='kid:myproject.templates.new')
def new(self, id, tg_errors=None, **kw):
    """Create new records in model"""
    if tg_errors:
        # Ask until there is still something missing
        return dict(record = defaults, form = model_form)
    else:
        # We have everything: save it
        i = Item(**kw)
        flash("Item was successfully created.")
        raise redirect("../show/%d" % i.id)

It was perfect: one simple method, simple error handling, nice helpful messages all around. Except, check boxes and select fields would not get the default values while all other fields would.

After two hours searching and cursing and tracing things into widget code, I found this bit in InputWidget.adjust_value:

# there are some input fields that when nothing is checked/selected
# instead of sending a nice name="" are totally missing from
# input_values, this little workaround let's us manage them nicely
# without interfering with other types of fields, we need this to
# keep track of their empty status otherwise if the form is going to be
# redisplayed for some errors they end up to use their defaults values
# instead of being empty since FE doesn't validate a failing Schema.
# posterity note: this is also why we need if_missing=None in
# validators.Schema, see ticket #696.

So, what is happening here is that since check boxes and option fields don't have a nice behaviour when unselected, turbogears has to work around it. So in order to detect the difference between "I selected 'None'" and "I didn't select anything", it reasons that if the input has been validated, then the user has made some selections, so it defaults to "The user selected 'None'". If the input has not been validated, then we're showing the form for the first time, then a missing value means "Use the default provided".

Since I was doing the validation all the time, this meant that Checkboxes and Select fields would never use the default values.

Hence, if you use those fields then you necessarily need two different controller methods, one to present the form and one to save it:

@expose(template='kid:myproject.templates.new')
def new(self, id, **kw):
    """Create new records in model"""
    return dict(record = defaults(), form = model_form)

@validate(model_form)
@error_handler(new)
@expose()
def savenew(self, id, **kw):
    """Create new records in model"""
    i = Item(**kw)
    flash("Item was successfully created.")
    raise redirect("../show/%d"%i.id)

If someone else stumbles on the same problem, I hope they'll find this post and they won't have to spend another two awful hours tracking it down again.

Posted Sat Jun 6 00:57:39 2009 Tags:

Quirks when overriding SQLObject setters

Let's suppose you have a User that is, optionally, a member of a Company. In SQLObject you model it somehow like this:

    class Company(SQLObject):
        name = UnicodeCol(length=16, alternateID=True, alternateMethodName="by_name")
        display_name = UnicodeCol(length=255)

    class User(InheritableSQLObject):
        company = ForeignKey("Company", notNull=False, cascade='null')

Then you want to implement a user settings interface that uses a Select box to choose the company of the user.

For the Select widget to properly handle the validator for your data, you need to put a number in the first option. As my first option, I want to have the "None" entry, so I decided to use -1 to mean "None".

Now, to make it all blend nicely, I overrode the company setter to accept -1 and silently convert it to a None:

    class User(InheritableSQLObject):
        company = ForeignKey("Company", notNull=False, cascade='null')

        def _set_company(self, id):
            "Set the company id, using None if -1 is given"
            if id == -1: id = None
            self._SO_set_company(id)

In the controller, after parsing and validating all the various keyword arguments, I do something like this:

            user.set(**kw)

Now, the overridden method didn't get called.

After some investigation, and with the help of NandoFlorestan on IRC, we figured out the following things:

  1. That method needs to be rewritten as _set_companyID:

            def _set_companyID(self, id):
                "Set the company id, using None if -1 is given"
                if id == -1: id = None
                self._SO_set_companyID(id)
    
  2. Methods overridden in that way are also called by user.set(**kw), but not by the User(**kw) constructor, so using, for example, a similar override to transparently encrypt passwords would give you plaintext passwords for new users and encrypted passwords after they change it.

Posted Sat Jun 6 00:57:39 2009 Tags:
Posted Sat Jun 6 00:57:39 2009