Contact
Homepage
gpg
email: toad@toselandcs.co.uk
irc.freenode.net: toad_

Toad's old blog

This is my old static blog, which was mirrored to Freenet and is mostly about Freenet. Links starting with /CHK@ or /USK@ etc. are Freenet 0.7 links and won't work from the web. Nowadays I mostly write somewhere else, but I consider that personal so I'm not going to link it here. This blog is 1) very old, 2) a mixture of work and personal, and 3) something I consider safe for most employers to see. PS Covid is not over!

Job hunting

I'm job hunting again! Let me know if you have remote work, ideally related to the climate crisis, using C++, C, Java, etc.

My 2015 internship on LowRISC

In 2015 I did a short internship for the LowRISC project at Cambridge. The objective was to investigate whether a short hardware tag for each 64-bit block of memory could eliminate some classes of memory attack. In retrospect I don't think it's the right approach, but I'd like to link it here.

Source code: Various repositories on my Github.

Docs: source (pdf)

Resources page

2020/05/08

Some links on the NHSX Covid-19 app

Tags: covid, android, privacy, decentralisation

Since leaving Freenet, I went to get a degree at Cambridge, then got a job with StarLeaf, who make awesome video conferencing solutions. I'm in the happy position of having a job and being able to work from home... in fact I'm classed as a key worker, though that's highly dubious given that I can work from home anyway.

I've been vaguely watching the situation with the NHSX app. The privacy issues are interesting, but well-covered elsewhere, and there are some technical issues that I may or may not have run into in other contexts.

The source code and some of the development docs have been published.

One interesting technical question is how they translate signal strength into proximity across different devices. Bluetooth Low Energy signal strengths vary enormously from one phone to another; some just have better aerials than others. So the measured RSSI, the "signal strength", does not necessarily tell you the distance between two phones. Some early papers just glossed over this - one of them was based in a university where everyone seemed to have exactly the same university-supplied phone. It looks like the NHS centralised approach does solve this:

  • On registering, the app sends your postcode (which it asks for purely for tracking the outbreak) and your device model. (RegistrationUseCase.kt)
  • It regularly fetches a new blind token to broadcast over BTLE.
  • On reporting symptoms, it offers to report your Bluetooth contacts for the last 28 days to their server, including the time, duration, and regular signal strength readings for each contact.

This allows the server to run some statistics to build a database of signal strength by device model, so they can try to compensate for it. Apparently they did test this, though exactly what the results were is not given in the docs - probably just an indication that they need such a database.
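To illustrate why the device model matters, here is a generic log-distance path-loss sketch - purely illustrative, not the NHSX algorithm; the calibration table, class name and numbers are made up:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative: estimate distance from RSSI, using a per-device-model
    // calibration value (the RSSI you'd expect to see at 1 metre).
    public class ProximityExample {
        static final Map<String, Integer> TX_POWER_AT_1M = new HashMap<>();
        static {
            TX_POWER_AT_1M.put("PhoneModelA", -59);  // hypothetical calibration data
            TX_POWER_AT_1M.put("PhoneModelB", -65);
        }

        static double estimateDistanceMetres(String model, int rssi) {
            int txPower = TX_POWER_AT_1M.getOrDefault(model, -62); // fallback guess
            double pathLossExponent = 2.0; // free space; real environments vary
            // Log-distance model: rssi = txPower - 10 * n * log10(distance)
            return Math.pow(10.0, (txPower - rssi) / (10.0 * pathLossExponent));
        }

        public static void main(String[] args) {
            // The same -70 dBm reading implies quite different distances.
            System.out.println(estimateDistanceMetres("PhoneModelA", -70)); // ~3.5m
            System.out.println(estimateDistanceMetres("PhoneModelB", -70)); // ~1.8m
        }
    }

Without some calibration like this, the same RSSI reading means different things on different handsets, which is the whole problem.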

It is not clear how Google/Apple's decentralised approach solves this particular problem. They give the individual health authorities the measured signal strength, but do they include the device model for both users' phones, or have they built their own signal strength database? My digging so far hasn't turned up this information. If they don't, expect a lot more false positives and false negatives... meaning people being told to isolate when they're not at risk, and people who do need to isolate being missed. Even with such a database, we have to assume that radio makes this distinctly fuzzy - phone in back pocket vs in hand etc. But the main source of false negatives will simply be not having enough people using the app - and getting enough seems unlikely at this stage...

How does that affect the centralised vs decentralised question? In countries which publish an app using the standard APIs, it will likely be used much more widely; the key components are pushed automatically via the app stores, although you still have to install the per-country app to get notified and to send in a report of symptoms.

I agree with the standard criticisms; it's basically impossible to fully anonymise this sort of data. The social graph can be mapped to other social graphs. All we can do is ask for legislative guarantees that it won't be sold to marketers. Which does matter - some of them are trying to sell you things like Boris Johnson. The tinfoil hat brigade can go home though - Google already have your location history. So we'll see. For more analysis see the mainstream media.

In any case, platform limitations are a real problem - even in an Android-only world, a foreground service can still get killed while the user is doing other things, and using a notification to restart the app is really very naughty and not going to work well on Android 9 onwards. On iOS it's much worse. So the NHS will probably have to switch to using Google's and Apple's systems. It's an interesting example of how US political realities determine the rest of the world's fate. I hope they have a good answer for the radio accuracy issues. It also illustrates the general principle that centralised implementations are usually easier - but they create a different set of problems.

Any advice here? Nothing you haven't heard already. Follow the government advice. Stay home, wash your hands, self isolate if you have symptoms etc. I'll probably install the app on one of my phones, though I tend to have a cough most of the time anyway so I'm very cautious. We'll probably get better privacy with the Google/Apple version, but the NHSX one could work - if we have the right guarantees from the politicians. Google and Apple already have loads of data about you anyway, and so does the government. And the NHS takes privacy pretty seriously on the whole - it's the politicians who periodically try to sell our data. The app likely won't be entirely accurate: some people won't be notified even though they were close to somebody infected, and some people will be told to self-isolate when they don't need to. Which is why there's also going to be manual contact tracing. Is it enough to lift the lockdown? That depends on the data - how good it is in practice. Thankfully not my call.

Meanwhile direct your paranoia in more productive directions! Now is the perfect time to hide bad news, and the recovery is the perfect time to bail out obsolete, destructive, unsustainable and long-term unprofitable fossil fuel dinosaurs. Government via video isn't really happening yet, but it'll be interesting to see; the problems with electronic voting aren't quite the same for legislators as they are for public elections, since your MP's votes are public knowledge.

Take it seriously. Stay safe, and avoid spreading the disease if you can: it's not just your safety that matters. Some of us are going to get through this.

2018/11/10

A straightforward rant

I don't like to pick fights with old people doing what they believe is right. But remember that most people who die in modern wars are civilians. And many of those wars are blatant acts of illegal aggression. For example, in Iraq:

  • 179 British soldiers
  • 4,424 American soldiers
  • 1,487 US contractors
  • ~ 26,000 insurgents
  • 182,522+ documented civilian deaths (some estimates are much bigger)

WWI:

  • 15-19 million deaths, military and civilian
  • ~ 40 million total casualties, including the wounded
  • Another 20 million in the Spanish flu the following year, to which the war is widely believed to have been a contributing factor.

Both wars were the direct result of colossal incompetence and imperial warmongering on the part of politicians, and at least in WWI the senior military too. And WWI led directly to WWII; the only reason there wasn't a third was that on August 6th 1945 we learned that we really can't afford one.

For context, 4.6 million people die every year from air pollution (indoor and outdoor), which is largely preventable. Numbers aren't everything. But while I support the existence of the armed forces and believe the government should look after them and their families (including foreign contractors), I won't be wearing a red poppy.

2016/07/15

Part 2 project

My third year project was on improving the performance and reliability of Freenet simulations. See my dissertation for documentation, and the branch fast-simulator-insert-on-multiple-nodes on my repository (pull request or source code).

I have finally finished my degree. I may or may not do a Master's. If I do, my project will probably be related to CHERI or LowRISC, i.e. hardware-assisted security (128-bit compressed fat pointers/capabilities are looking very promising). I spent last summer hacking LLVM to try to prevent control flow hijacking using tagged memory on LowRISC; see the details here. Ultimately it was a proof of concept, and that approach (tagging code pointers etc) has some serious limitations and may be quite costly, so I'm leaning towards CHERI (capabilities) being the better answer: it provides full memory safety and wipes out several huge classes of remote code execution attacks. Even so, LowRISC is very cool: it's an effort to build, and ship, open-source Systems-on-Chip using the RISC-V ISA. The current (Rocket) cores are relatively low-end but should include some interesting features such as minion cores and tagged memory. And there will be actual silicon!

I still think computer security is hopeless in practice for the next 20 years or so, but building more secure systems is worthwhile, even if politically and economically they cannot be deployed until society is ready for a grown-up debate over surveillance, covert versus overt backdoors, and the apparent need to be able to hack everyone all at once. Not to mention the fact that the business model of most of the Internet relies on nobody having any privacy. But if there is no computer security there certainly won't be any privacy: it's a necessary but not sufficient condition. Right now there is no security and no privacy, and certainly no anonymity - assuming that your threat model includes actors like intelligence agencies, and in many important cases it should.

At the moment I should probably not be involved in Freenet, or any other open source project, even in my own free time, because under the terms of my current employment it would potentially compromise any project I am involved in. So that's that. Nextgens and Steve seem to be making good progress, and the world has changed considerably since I first started working on Freenet. Have fun folks. Please do not contact me about any technical issues in Freenet that might conceivably be patentable, for the time being, and note that more or less every useful idea is patentable!

As for the rest, the world is going to hell, but I will survive. For now anyway.

2014/09/26

Final release candidate for purge-db4o

The snapshot below is feature complete and reasonably well tested, though it certainly needs more testing. Hopefully it will be merged and deployed soon. See the post a few pages down for an explanation of exactly what this does: the client layer is rewritten, and downloads and uploads are stored in a new way, hopefully resulting in much more reliability and less disk I/O. Lots of bugs have been fixed too.

The latest version is here (github).

git@github.com:freenet/fred.git

Branch "purge-db4o-crypt"

Signed tag "purge-db4o-snapshot-28"

Commit hash d7ad6fccb89e8f36b3711c3fdb82b8ec3f92fc50

Jar file and signature.

Source code and signature.

Bouncycastle crypto library.

You will need to download the new jar (save it as freenet.jar), and the bcprov library, and edit wrapper.conf to point to them. See below for more detailed instructions.

Also, you should back up your master.keys, node.db4o[.crypt] and persistent-temp* before trying the new code. It should be fairly stable now and there shouldn't be any more format changes. However, previous snapshots may not be compatible with this snapshot, so you might lose your downloads/uploads if you've been testing previous snapshots. If you want to switch back to mainline fred and keep your old downloads you will need to restore from backups, hence the advice to back up before trying it out (and it may take a while for the code to be merged given its size, so you probably will want to switch back to mainline fred at some point).

This hasn't been all I've done on Freenet this summer; I have also contributed to the link length fix, which seems to be making a big difference to data persistence and performance (thanks mainly to the volunteers involved and especially the authors of the paper on opennet performance!), and posted a couple of implementations of old load management ideas that need to be tested. Also, the purge-db4o-crypt code above includes lots and lots of bug fixes, mostly to the client layer, though no doubt it introduces new bugs too.

Thanks! I will be on holiday for a few days and then back to university, so I won't be working for Freenet in a paid capacity, although I will likely respond to reasonable queries. I may come back to Freenet next summer, depending on what other opportunities arise. Have a great year!

2014/09/23

Another release candidate for purge-db4o

The client layer rewrite aka purge-db4o is finished. That is, it is feature complete, and has had a reasonable amount of testing (but needs more).

Major changes:

  • No longer uses db4o to store anything, but will migrate from old node.db4o* files (downloads and uploads will be restarted)
  • Should be much more reliable, and cause much less disk I/O
  • Uses the multi-container code even for persistent inserts
  • Improved disk crypto (I may improve this further)
  • Trivial changes to FCP
  • New metadata format (fixes some minor bugs, but we use the old one for now by default)
  • Handles low disk space much better. Migration may use significant disk space; if less than 512MB is free, transient requests will fail to avoid breaking the system, and if less than 1GB is free, persistent requests will fail or not start. (There is a sketch of this policy after the list.)
  • Various bugfixes
  • Rename RandomAccessThing to RandomAccessBuffer, etc.
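The low-disk-space handling mentioned above amounts to a couple of simple threshold checks. Something like the following sketch - the class and method names are made up for illustration, not the actual fred code:

    import java.io.File;

    // Illustrative thresholds matching the policy described in the list above.
    public class DiskSpaceCheck {
        static final long MIN_FOR_TRANSIENT  = 512L * 1024 * 1024;  // 512MB
        static final long MIN_FOR_PERSISTENT = 1024L * 1024 * 1024; // 1GB

        /** Refuse to start a transient request if we are below 512MB free. */
        static boolean allowTransientRequest(File tempDir) {
            return tempDir.getUsableSpace() >= MIN_FOR_TRANSIENT;
        }

        /** Refuse to start (or fail) a persistent request below 1GB free. */
        static boolean allowPersistentRequest(File persistentTempDir) {
            return persistentTempDir.getUsableSpace() >= MIN_FOR_PERSISTENT;
        }
    }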

The latest version is here (github).

git@github.com:freenet/fred.git

Branch "purge-db4o-crypt"

Signed tag "purge-db4o-snapshot-24"

Commit hash bc5ce630c68db657bfcc726e7591d9ec9e88ae53

Jar file and signature.

Source code and signature.

Bouncycastle crypto library (needed if using disk encryption i.e. physical security level of LOW or higher).

Please test, and please consider for integration. Thanks!

I will do some more testing and might do better password encryption, but I will have to stop working on Freenet within the next few days because of uni... I will answer any reasonable queries by email even then.

TESTING ADVICE

It will automatically migrate from node.db4o*, and hopefully we won't need to make further incompatible changes, but further changes are possible, e.g. if operhiem1 doesn't like the class names. Also there may be more bugs, including in migration. So ...

  • Download the jar and sig file, and check the signature.
  • Download the bouncycastle library, either from the link above or the file bcprov-jdk15on-151.jar from their website, and save it in your Freenet folder (in the same folder as the previous version of the jar, bcprov-jdk15on-149.jar).
  • Shut down the node.
  • Back up master.keys, node.db4o or node.db4o.crypt, and the whole persistent-temp* folder (e.g. persistent-temp-1234).
  • Edit wrapper.conf to make sure that we are running freenet.jar not freenet.jar.new:
    wrapper.java.classpath.1=freenet.jar
  • And that you use the new bouncycastle library:
    wrapper.java.classpath.2=bcprov-jdk15on-151.jar
  • Replace freenet.jar with the downloaded jar.
  • Start Freenet.
  • Test it! The changes affect anything you do in terms of downloads, uploads, site uploads, fproxy, etc.

NOTE: This is not compatible with previous snapshots, so you may have to restore the backups of your node.db4o* etc and then re-migrate if you have been testing previously.

NOTE: It *is* compatible with disk encryption (physical security level above NONE, "encrypt temp buckets" / "encrypt persistent temp buckets" enabled in advanced config). Please test this; it is the newest, riskiest bit!

PLUGINS

Plugins have not been updated. There are updated versions available but they may need further changes after the refactoring in snapshot 19.

Enjoy! And please let me know if you find any bugs in the next few days.

2014/09/16

Nearly there...

New readers: Read the previous post to see what purge-db4o is and what this is all about...

Please test the latest snapshot of purge-db4o. This will automatically migrate your old downloads and uploads, but you should back them up first (copy master.keys, node.db4o* and persistent-temp*). The only thing it doesn't support is disk encryption, and there is some dispute over whether we want to keep that feature. It includes a new metadata type, COMPAT_1465, but it defaults to the old COMPAT_1416.

The jar file (signature). Source code (signature).

There has been a lot of discussion on Frost, amid the usual racist slurs and other nonsense, about why purge-db4o needs a new metadata type. The answer is that in the process of rewriting large parts of the client layer I discovered some bugs in how we use metadata for storing files ("splitfiles") on Freenet. The metadata that describes how to fetch a file includes some "top blocks" fields, so that we can determine accurately how big it is, how to reinsert it, etc. One of these fields contains the "compatibility mode" for the file (e.g. COMPAT_1416). This is an overall, bug-for-bug-compatible version number which tells the node exactly how to do an insert: what kind of keys to use, exactly how to split a file into segments, and so on. For example, a 129 block file should be split into two segments of roughly equal size, rather than (as old versions of Freenet did) a segment of 128 blocks and then a segment of 1 block; but if we are reinserting an old file, we want to produce the same key, so we should divide it up in the old way. Another of the "top block" metadata fields indicates whether compression was turned off when we uploaded the data. Hence it is possible to upload the same data and get exactly the same CHK, provided we use the same settings, which are included in the metadata.

Unfortunately, we were not filling in the compatibility mode or the don't-compress flag (on smaller splitfiles). I fixed this, but it results in new keys. Since we have new keys, we need a new compatibility mode (COMPAT_1466), so that reinserts of files inserted with 1416 to 1465 give the same CHK. And then we run into another metadata bug: if the compatibility mode is something the node doesn't know about, it fails to fetch the data (even though it's only included to make reinserts convenient; there are separate version fields for changes that would actually prevent reading the data). I have fixed this in new code, but existing 1465 or earlier nodes will fail to fetch data inserted as COMPAT_1466; so 1466 (or whichever build includes purge-db4o) will by default insert data as COMPAT_1416. This is a temporary measure and will change in the following build.
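To make the segmentation point concrete, here is a rough sketch of the old and new splitting policies described above - purely illustrative, not the actual splitfile code; 128 is the maximum number of data blocks per segment:

    // Illustration of the two segment-splitting policies described above.
    public class SegmentSplitExample {
        static final int MAX_BLOCKS_PER_SEGMENT = 128;

        // Old policy: fill each segment to 128 blocks, e.g. 129 -> [128, 1].
        static int[] oldSplit(int blocks) {
            int full = blocks / MAX_BLOCKS_PER_SEGMENT;
            int rem = blocks % MAX_BLOCKS_PER_SEGMENT;
            int n = full + (rem > 0 ? 1 : 0);
            int[] segments = new int[n];
            for (int i = 0; i < full; i++) segments[i] = MAX_BLOCKS_PER_SEGMENT;
            if (rem > 0) segments[n - 1] = rem;
            return segments;
        }

        // New policy: same number of segments, but roughly equal sizes,
        // e.g. 129 -> [65, 64].
        static int[] evenSplit(int blocks) {
            int n = (blocks + MAX_BLOCKS_PER_SEGMENT - 1) / MAX_BLOCKS_PER_SEGMENT;
            int base = blocks / n, extra = blocks % n;
            int[] segments = new int[n];
            for (int i = 0; i < n; i++) segments[i] = base + (i < extra ? 1 : 0);
            return segments;
        }
    }

Reinserting an old file has to use oldSplit-style behaviour to reproduce the same CHK, which is exactly why the compatibility mode has to be recorded in the metadata.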

None of this really matters, and the reason I've posted test builds is precisely to find such bugs by having a larger number of people testing the pre-release code. These are not even official snapshots; update.sh won't fetch them, so you have to download the jars manually. (Although if Steve's update channels mechanism were working you might be able to use that in a similar situation in future.) Release early, release often; a lot of users (such as TheSeeker) will happily try the new code and find bugs in it, which can then be fixed before release. Also, note that auto-updates are inserted with COMPAT_1416, and that fixed compatibility mode is only changed when the auto-update key is updated, so updates will work in any case, although they will be tested before releasing a build with purge-db4o merged.

I am confident that people will continue to find things to whine about, but since there were legitimate questions to answer I have answered them. Also, purge-db4o will be well worth the wait; it fixes a number of long-standing bugs as well as reducing disk I/O and greatly increasing reliability (at least for downloads), relative to the old situation where queues would regularly corrupt themselves for many users. Please test it if you can.

IMHO purge-db4o is working fairly well at this point, and hopefully it can be merged soon. It was originally the plan to include support for disk encryption ("physical security level" of NORMAL, HIGH or MAXIMUM). Lately there has been some discussion of whether this is a good idea; I am not sure about this. The unix philosophy is a reasonable argument when considering tools for geeks, but "usable security" is IMHO more important. Input welcome on this... I have posted a message explaining the major arguments as I see them. Unfortunately, "use TAILS" is not really a good answer for Freenet, because we want users to run Freenet in the background for long periods of time (this is essential for darknet to work well).

On 05/09/14 19:43, localghost at eOC4Zm8KjRpMFhNBp6DmI8K4URaq8bQZH45y0dLHEnI wrote:
> creamsoda at 0vpcRHZV1ftyj4mJpZnuYaG8wpkNIvf3qa3b-LUcsZs wrote:
>> TAILS is meant to be for short-lived sessions of minutes to hours right? That doesn't lend itself well to freenet which works better over a longer time.
>
> Ya, I kinda figured that.. just curious if anybody had done it. I was asking for a friend, as stated. I prefer to let Freenet run constantly, to help the network - and have not used TAILS, personally..

This is exactly why "leave disk crypto to the operating system" isn't so obviously the right policy for Freenet.

The arguments for not doing disk crypto:
- We're likely to get it wrong.
- If they download video files etc there will probably be leaks. (But it IS possible to limit this)
- Memorising a good password is hard, and the users who are willing to do so may be the same group as the users who will install a secure linux distro just to run Freenet, or at least do full disk encryption on Windows (presumably using BitLocker?)
- If we do disk crypto we need to turn on swap encryption. This is trivial on recent Windows but arguably not a good idea.

The arguments for doing disk crypto:
- We want people to run Freenet long-term. They usually won't install a new OS just to run Freenet, and they can't run it long-term from a livecd. This is one of the reasons we support Windows!
- People who are willing to remember a long password may not have the technical know-how to set up a secure system.
- People who install Freenet casually and then discover something interesting may not want to reinstall at that point.
- Tempfiles and other evidence of your past browsing should not be recoverable even if you *have* the password. Most full disk crypto setups don't have this property, but some do (e.g. /tmp on encrypted swap).
- We can do some interesting things in the long run such as combining a password with a local nonce and data fetched from the network to decrypt local data, giving "hidden identities" e.g. for WoT/FMS.
- In practice physical attacks are likely to be more common than network attacks, in most environments.

A long term solution might be to have a separate node (which doesn't store anything private), possibly on dedicated hardware, and client (which stores your downloads and runs from a livecd or whatever, and uses tunnels through the node). But we need a decision in the short run.

I used to think that disk crypto in Freenet made sense. I'm half convinced by the "good passwords are hard" thing. operhiem1 (Steve) is in favour of removing it, as is nextgens and most of the old guard (unix philosophy etc). I'm not convinced one way or the other. I will defer; I'll be away very soon; but it's worth discussing again maybe...

Technically keeping passworded disk crypto isn't particularly difficult:
- Review and merge the crypto pull request. We need this anyway.
- Integrate with purge-db4o. (Fairly easy)
- Use a proper iterated password function.
- Think about changing the UI e.g. ask for a password when first start a download? But what about client-cache?

Removing it will mean decrypting people's data and posting a useralert. It slightly speeds up the schedule for purge-db4o.
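For what it's worth, a "proper iterated password function" would look something like the PBKDF2 sketch below - a generic illustration only; the iteration count, key length and class name are made up and are not a statement of what fred uses or should use:

    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    // Illustrative PBKDF2 key derivation from a password (Java 8+).
    public class PasswordKeyExample {
        static byte[] newSalt() {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            return salt;
        }

        // Derive a 256-bit key; 100,000 iterations is an example figure chosen
        // to make brute-forcing the password expensive.
        static byte[] deriveKey(char[] password, byte[] salt) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            SecretKeyFactory factory =
                    SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return factory.generateSecret(spec).getEncoded();
        }
    }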

Finally, I will be leaving soon. I will answer emails occasionally, but I am unlikely to have time to contribute much to Freenet in term time. It is likely that I will be back next summer. Being able to work consistently on a single project (sorting out the client layer), and having other people deal with routine maintenance and releasing (thanks Steve!), has been a good thing. So has helping out with the link length changes, which others did most of the work on and which seem to have been a major success. I have also pushed a couple of proposed changes to load management. These are small and will need testing, but might have a big impact. ArneBab's recent simulations confirm quantitatively that reducing the proportion of rejects could have a big impact on effective routing.

2014/09/03

Please try the new unofficial Freenet snapshot!

For most of the summer, and a fair amount of time last year, I've been working on fixing one of Freenet's most complicated architectural bugs. This concerns the client layer, the part of Freenet that deals with putting together files and freesites from blocks (as opposed to actually fetching blocks), and in particular the code supporting persistent downloads and uploads (long-term downloads on the Downloads page for example). The current client layer does far more disk I/O than it needs to, leaks information and space in node.db4o, and above all it is unreliable: Many users report losing their download queue regularly, especially on Java 1.7.

The old code stores the requests in a database, and stores blocks from the downloads in a big file called persistent-temp.blob. This has severe robustness problems: if one byte is lost in the database file (which is huge), there is a tendency for the entire database to be lost, with the result that Freenet either fails to start up or loses all your downloads. It also results in far more disk seeks, and fsyncs, than are necessary, because the largest, most important data is actually very simple: for a big file download, we need to track which blocks we have downloaded and where we've put them. The corresponding database structures are complicated, with lots of overhead and lots of unnecessary seeking related to transactions on structures that are actually simple, robust, and easily recovered from the other data stored. And finally, the blocks aren't stored together on disk, so we have to copy the data when we finish the download, even if it doesn't need to be decompressed or filtered.

Hence the new code keeps all the downloaded blocks, and the status information for each block etc, together in a single temporary file, which will be slightly larger than the actual download. The overall list of which files we are downloading and where to, as well as complicated but small details related to e.g. downloading parts of freesites, is kept in a new file "client.dat", using Java's built-in serialization, which is much smaller than the old "node.db4o".
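To give a flavour of the idea, here is a minimal sketch of keeping per-block status and block data together in one file at fixed offsets - purely illustrative, with made-up names; the real code is far more involved:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Sketch: one temp file laid out as [numBlocks status bytes][block 0][block 1]...
    public class BlockTempFileExample {
        private final RandomAccessFile raf;
        private final int numBlocks;
        private final int blockSize;

        BlockTempFileExample(String path, int numBlocks, int blockSize)
                throws IOException {
            this.raf = new RandomAccessFile(path, "rw");
            this.numBlocks = numBlocks;
            this.blockSize = blockSize;
            // Status region plus space for every block, at fixed offsets.
            raf.setLength(numBlocks + (long) numBlocks * blockSize);
        }

        void storeBlock(int index, byte[] data) throws IOException {
            raf.seek(numBlocks + (long) index * blockSize);
            raf.write(data);
            raf.seek(index);       // mark this block as downloaded
            raf.writeByte(1);
        }

        boolean haveBlock(int index) throws IOException {
            raf.seek(index);
            return raf.readByte() == 1;
        }
    }

Because the status bytes can always be reconstructed by re-checking the stored blocks, losing one byte here costs at most one block of one download, not the whole queue.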

The result should be much more reliable (corruption will at worst lose you one download, and for downloads we should mostly be able to recover from it), should cause much less disk I/O - so it will slow down your computer, and wear out your hard disk, much less than the current code - and it also appears to be noticeably faster in some cases (but don't expect it to affect network performance). Note that inserts are slightly less robust than downloads, but then they should be over within a couple of weeks or they won't be much use.

The current prototype works for downloads, uploads and site uploads. It does not automatically migrate your old downloads, and it does clean up persistent-temp automatically for you, which means if you just start it without making a backup you will lose data. If you want to keep your download and upload queues you need to backup node.db4o* and persistent-temp* first.

Also, it doesn't support disk crypto ("physical security level" in freenet configuration). There is code to do this (part of a larger Summer of Code project by another student on simplifying our crypto) but it hasn't been fully reviewed and integrated yet. Whether we should actually provide password protection for the client cache and the downloads in Freenet is unclear; it's true that it provides some protection to users who install Freenet casually and then find a sensitive file they want to download (they can set a password at that point), but it's also true that remembering a strong password is pretty onerous, and the same people who are willing to do that are probably also willing to install full disk encryption. So we may get rid of this dubious feature.

Prototype new client layer: please test!

Now the important part ... the working prototype! Download the jar file, shut down Freenet, back up node.db4o* and persistent-temp*, replace your freenet.jar (or freenet.jar.new, check wrapper.conf) with the downloaded jar, then start it back up. Your download/upload queues will be empty, so download / upload some files / sites, try stuff out, and let me know how it goes! I read the Freenet board on FMS, amongst other ways of contacting me ...

Prototype new client layer jar file (signature) and source code (signature) from tag purge-db4o-snapshot-1 (git ID 85e058d530eefb3e4d7d4e93bc6c8bff6c9806aa).

What else is going on?

Another important development this summer in Freenet: a paper (freenet) explaining that opennet has real routing problems, with a solution. This is something that we've suspected for a while, partly because nodes that download a lot tend to have very widely spread out peer locations, while for good routing we need most of our connections to be close to us in the keyspace (every node has a "location" between 0.0 and 1.0, and we route a request to the connected node whose location is nearest to the key, converted to a location). We have deployed a slightly crude fix, forcing 70% of our opennet connections to be "near" links (less than 0.01 distance away), and the results so far show a substantial performance gain.
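To make the "near link" idea concrete: locations live on a circle, so distance wraps around at 1.0. A rough sketch of the distance calculation and routing rule (illustrative only, not the actual fred code):

    // Circular keyspace distance, the "near link" test, and greedy routing.
    public class LocationExample {
        static double distance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);   // the keyspace wraps around at 1.0
        }

        // The crude fix forces ~70% of opennet peers to satisfy this test.
        static boolean isNearLink(double myLocation, double peerLocation) {
            return distance(myLocation, peerLocation) < 0.01;
        }

        // Route to the connected peer whose location is closest to the key.
        static int closestPeer(double keyLocation, double[] peerLocations) {
            int best = -1;
            double bestDistance = Double.MAX_VALUE;
            for (int i = 0; i < peerLocations.length; i++) {
                double d = distance(keyLocation, peerLocations[i]);
                if (d < bestDistance) { bestDistance = d; best = i; }
            }
            return best;
        }
    }

Greedy routing like this only converges quickly if most of your peers are close to you in the keyspace, with a few long-range links - hence the fix.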

I actually implemented the code, but ArneBab and operhiem1 were instrumental in it happening. You can see more about it on ArneBab's site. He has also recently published this gem, and has been working on fundraising. Steve (operhiem1) has been handling day to day code review and releases (which has been a huge help, allowing me to focus on sorting out the client layer), and p0s continues to improve WoT, with a couple of big breakthroughs well on the way.

Things are looking interesting for Freenet ... just as I have to go back to uni in a month. I may be back next summer, we'll see how it works out; I might get an internship, but most of them barely pay the rent, and I do have other overheads. I'm doing well at university, in spite of numerous minor difficulties, and looking forward to (or is that trying not to think about?) next year, but thankfully first I will take a brief holiday. One interesting aspect of this was that in the first year of Cambridge Computer Science you study 50% CS, 25% maths (same as Natural Sciences maths) and 25% Physics (or a few other options). Physics is an interesting and important subject, and I did well at A-level; I seriously considered transferring, but concluded eventually that a career in Physics probably doesn't make sense. This consumed a great deal of my time and energy earlier in the summer. University is worthwhile because there are some things you just have to study intensively, you can't easily pick them up on the job; and it opens up some new career options. And as universities go, Cambridge is of course brilliant (and therefore also terrible!). I don't know where I will eventually go career-wise... I could come back to Freenet full time, but only if we have enough funded people and volunteers to make a serious dent in the numerous not-quite-intractable problems (see ArneBab's proposals). Academia is a possibility. Some industry jobs may be interesting, in spite of the tendency to ageism, London-centrism and the all-devouring financial cancer. We'll see.

Meanwhile the world tears itself apart and many of the really big problems are ignored as enthusiastically as ever, with the tabloids publishing yet more climate denial nonsense. Islamist scum are using Diaspora but have so far ignored us ... this is good. I'm hoping we don't get obviously involved in the distributed online marketplaces thing any time soon (we can't deal with that sort of attention just yet IMHO).

It looks like the government was right when it said the DRIP act doesn't give it any new surveillance powers - it was, probably illegally, interpreting the old powers to give it most of what the (blocked) Snooper's Charter supposedly contained anyway, and it needed the DRIP act to make this legal. So the UK government has powers to demand intercepts from web services (even if it means architectural changes) and presumably from peer to peer networks too, and to pay foreign services for their co-operation. As do the Americans, at least in the web services case. Thankfully Freenet is open source, so there's a limit to what they can practically do to us; "secret" backdoors don't last long, or at least they need to be very subtle (think Heartbleed). The fact of the matter is that right now Freenet is too small to be worth attacking directly, and in opennet mode what security it provides is easily circumvented at the three-letter-agency level (and it may be vulnerable to much less well funded adversaries too). But we can change this. Darknet allows for some very interesting possibilities.

In the unlikely event of Scotland voting yes, I'm going to have to find out exactly what will be involved in emigrating, but then as we'd be out of the EU (and ECHR, which even Russia is part of!), it would all change anyway ... Government policy proposals that are so far out of the legal norms that they must be pure political posturing ... and two potentially world-reshaping wars ... Life goes on ... for most ...

The key is that the bad guys don't always win; life is more complicated than that. For example the network neutrality directive as passed by the European Parliament was surprisingly progressive (though it won't save Freenet from arbitrary blocking); this means it'll either be rewritten into the usual weasel-words, or it'll be dropped. See the software patents stalemate in 2005 for an example of the flawed, but at least somewhat democratic EU legislative process. When the powers that be don't get what they want that way, they find alternative routes; so software patents are now (or will be soon) legal here, not because of an EU directive (they tried that, and lost, in 2005), but because of an international treaty. Similarly TTIP threatens to massively reduce the range of possible government policies by instigating a parallel court system so corporations can sue governments for environmental or health regulations that threaten their profit margins, as well as preventing us from undoing pointless and inefficient privatisations and "harmonising" food safety down to the lowest common denominator. But this can be stopped. ACTA was stopped. The enemy is not omnipotent; there is a political or economic cost associated with any action they take. You just have to try...

2013/08/03

So much to do, so little time...

It's been a long time since I've updated my flog. Part of the reason for this update is just to post my new Freemail 0.2 address.

I've had more time for Freenet since I finished my Physics course last August, although Maths was still a big demand on my time until June this year. I am confident that I got the A I need to get into Cambridge (but I'll find out soon), so I will be starting university in October, and therefore stopping paid work on Freenet. I may work on Freenet in a paid full-time capacity next summer, but I'm not ruling out doing something else. Which means that the Freenet side-conference in Berlin (probably but not certainly part of CTS IV) will mark the end of my current role on Freenet, at least for some time.

So firstly, what's happened since the last update?

  • Financial situation: This is a bit less hopeless than we had feared a year ago, partly due to the rise of Bitcoin, partly due to my not working full time on Freenet for much of that period due to academic commitments, and only needing to be paid for a limited period in the remainder of this year, and partly because of the political climate (more below).
  • New people, and new paid staff!: Xor (p0s) is a paid member of staff, working mainly on solving the remaining problems with Web of Trust. We have 5 Summer of Code students this year, working on a smartphone app for exchanging darknet references, Infocalypse (improved Mercurial over Freenet, but hopefully Git too eventually), searching for filesharing purposes (see below), and improving the performance of connections between Freenet nodes, especially on wireless networks. And we have had a number of new volunteers, although mostly on apps and plugins and installers and translations. See below for the situation with the core.
  • Major changes to the auto-updater: The Freenet auto-update system now supports deploying individual libraries. Meaning we have 99% of what Ximin proposed with "splitting up freenet-ext.jar", at least in principle. So far we've used this to deploy the Bouncycastle crypto library.
  • Connection setup encryption code: Thanks to nextgens and Eleriseth, we now use ECC crypto for setting up connections. This is faster, more secure and uses fewer bytes (or it will in 6 months when we can get rid of the old keys). This is harder than it sounds, and there is more work to do, notably we want to change the packet-level encryption.
  • Progress on searching: We have a new, anonymous volunteer to run a Spider index, and I've done various bugfixes; searching for freesites should be usable again soon. Also, leuchtkaefer's Summer of Code project on filesharing, mentioned above.
  • Freemail v0.2, and merging/deploying code generally: Freemail v0.2 has been deployed. Unfortunately a lot of code related to Freenet has taken a ridiculously long time to review, merge and/or deploy, when it hasn't coincided with the immediate priorities. Freemail is important IMHO, providing strong privacy for email over Freenet, including making it very difficult to obtain metadata. There is a fair bit of other code that needs merging too; I've made a start but there is more to deal with.
  • Opennet: The most visible change is that opennet nodes (that is, everyone who isn't darknet, i.e. pure friend-to-friend) can connect to up to 100 peers, depending on bandwidth. Of course, many Japanese users use customised builds which increase this to 500 peers. It is unclear whether this is causing requests to be rejected on nodes with fat connections because of running out of threads; we need to investigate this soon. Also, my computer runs regular tests on opennet, to feed into digger3's stats, but unfortunately these fail rather frequently, probably because of IP-address level DoS protection on opennet. So I am unable to get a clear picture of whether opennet bootstrapping is working well; others say it is. This will need fixing soon.
  • Lots of smaller stuff: Minor client layer changes particularly related to MIME types and site inserts, MathML support in the filter (and other fixes), new versions of various plugins, web interface improvements, lots of minor bug fixes, refactoring and cleanups, some nice optimisations, small improvements to opennet bootstrapping performance, reduced disk I/O, better IPv6 support, more stable connections on slow CPUs (especially if multi-core), and of course lots of translation updates from our various volunteers, and new default bookmarks (notably Enzo's Index).

Of course, this is not as much as I'd hoped we'd achieve in the period. Part of the problem is a lack of core developers. Who is going to maintain Freenet itself ("fred")? I'm hoping to get one or two people doing releases; the scripts have been mostly tidied up and we're ready to go. But it's still true that the majority of development on fred is by me, and much of the code is poorly documented, often over-complex, and hard to get into. And there are nowhere near enough unit tests. One priority is fixing the client layer database, see below.

Also, I've been working on a lot of "miscellaneous" stuff - a huge backlog of minor but important bugs, stuff to merge, stuff to move to the bug tracker, and so on, combined with frequently being distracted to help people with their projects (which is usually highly productive, don't get me wrong!). This means that the big stuff I mention below hasn't moved as fast as I'd hoped. But I will implement at least some of the big changes before I get to uni.

So what are people working on other than the stuff already mentioned? Two major projects:

  • Windows: romnGit has been writing a new Windows installer and helping implement the code needed to update the wrapper binary on Windows (note that this is pretty easy on Linux/Mac). Both of these are desperately needed to reduce conflicts with anti-virus software (the current installer is based on AutoHotKey so it is often falsely flagged as malware, and there are often problems with the wrapper too), but there is more to do; we need a proper code signing certificate, we need to use the AV vendors' whitelisting programmes and so on.
  • New web interface: makdisse (_mak) has been working on mockups of a new web interface for some time, and is also interested in implementing it. operhiem1 has been helping him. This promises to improve Freenet's usability and attractiveness significantly.

What are the larger problems?

  • Can opennet be secured?: Over the last year, and in fact over several years, the question "Can we secure opennet?" has come up repeatedly. After running around in some very large circles, I come back to where I started: only on darknet can we have reasonable security. Mostly it boils down to Sybil attacks: on opennet, an attacker can create any number of nodes fairly cheaply, can connect to every node, or can easily move to a specific targeted location on the network. Plus they can find nodes fairly easily. This allows for some fairly cheap and powerful attacks to trace the authors of sensitive content. This is of course much easier if the target is doing something foolish, such as reinserting a known file as a CHK; but similar, if slower, attacks are possible on chat forums or freesite updates, even if people are careful. On darknet, by contrast, every connection requires either social engineering or compromising somebody's computer. There are ways we can improve the security of opennet, and we've discussed some of these. Unfortunately everything that makes costs higher for the attacker (apart from darknet) also makes costs higher for regular users, especially if they have slow computers (hashcash), poor eyesight (CAPTCHAs) etc. Scarcity of IP addresses doesn't seem to work well either; enforcing one node and 100 connections per IP would be complex, and IP addresses probably aren't expensive enough to obtain for it to be a major deterrent. The ultimate option is to require a donation to set up a full Freenet node, but this again would likely put off a lot of users.
  • What about darknet?: On a large global darknet, Freenet should provide an interesting level of security. It would be very difficult to block, especially with transport plugins, but not of course impossible: There's not much hope in places like Iran that really don't care about the cost or economic impact of filtering. It is totally decentralised, without any "seed nodes" or bootstrapping points to DoS or block. And with the PISCES paper, we can set up onion-like tunnels with fairly impressive security (unfortunately, onion routing on opennet-style DHTs does not provide strong security). The big question is, can we build a global darknet? This will partly depend on making darknet (friend to friend freenet connections) easier and faster, and there are a lot of things we can do to make that happen, and also on making the network as a whole bigger and achieving viral growth. But above all, it's a techno-social question: Will enough people run Freenet that they often know other Freenet users? However, there is much we can do to make darknet work better. PS we have a solution to the Pitch Black attack but I'm still waiting for thesnark to confirm it with actual numbers and graphs from his simulations!
  • The client layer database: IMHO the best solution to the problems we have with both database corruption and disk I/O related to persistent downloads and uploads is to remove the db4o database, and especially the persistent-temp.blob mechanism. I have a plan for how to implement this, and will start work on it soon. This will also have the pleasant side effect of solving some of the problems with big freesite uploads.
  • Maven: Some of the pending code (the ATOM, RSS and SVG filters, and the JNA-based code for reducing our performance footprint on Windows) is built with Maven. Unfortunately Maven's default configuration doesn't meet our security requirements: it is possible to configure it to verify signatures on binaries, but how do we know those binaries went through the same check? Granted, if a package's official binary is compromised it will affect a lot of other people, but we need to solve this somehow, and it looks like it will be problematic.
  • Web of Trust, Sone, Freetalk and PSKs: The Web of Trust plugin is slow, causes enormous amounts of disk access, and apparently uses a lot of memory. Xor is working on this, and it should be possible to make fairly large improvements in the not too distant future. Sone currently has severe scalability problems, which need to be fixed before it can be made official. Freetalk is currently broken, and we no longer recommend it on the web interface. Which means for now we use FMS, which unfortunately can't be easily bundled with Freenet. All of these tools (and several others) will have severe scalability problems if we get more users. However, it should be possible to achieve acceptable performance for chat forum style applications (even for real time chat on some assumptions), for fairly large networks, provided that we poll mainly users that use the same forums we do, that we have some means of announcing within a single forum, preferably in a way that will be visible to everyone on that forum and not just to one user (this will reduce the need for WoT trust ratings), and that we implement a new key type called a Programmable Subspace Key. This is a special kind of SSK which allows for programmable verification rules; in particular, it allows for many users to share a single PSK queue (like a series of SSKs), while still having protection if one of them starts spamming. Also, using ECC encryption for SSKs, and some datastore changes, will allow us to have several different SSK/PSK sizes: an ~ 800 byte SSK, which fits in a single packet including its public key, perfect for real time chat; a ~ 2KB SSK, for most forum applications etc; and a 32KB SSK for cases where this is useful. Unfortunately all this will be a significant amount of work, and much of it in the core; we don't have the person-power at the moment... Also, currently all spam-proof forums depend on CAPTCHAs for announcing new identities; this is hopeless in the long run, and a determined attacker could break WoT relatively cheaply. For the same reasons given above when discussing opennet, e.g. hashcash won't solve the problem; but there are some preliminary ideas for how to solve this on darknet (Scarce SSKs, limited per unit time by the number of social connections you have).
  • Recently Failed errors: We have been getting RecentlyFailed errors much too early, to the point where content that should be reachable may not be. I think I have a solution to this, but it needs testing / it needs to be deployed without too many other network level changes so we can see what happens.
  • Load management and probe requests: I'm fairly sure that some variant on New Load Management will work. There are some important simplifications to load limiting that need to happen before trying again, notably to how we divide load between peers and how we divide it between SSKs and CHKs. Ian's idea for sending a slow-down signal before we actually start rejecting requests is also very important, and is closely related to some mechanisms used in routing IP traffic, but again we need the load limiting simplifications first. A medium term goal is to reduce the number of nodes reporting that they reject a large proportion of requests; for example, my 40KB/sec node used to have 40%+ rejects; I've increased it to 75KB/sec and it's not much better.

Much of this has been discussed in various places, and there have been various roadmaps posted, for example here (a bit strong on the IP-scarcity opennet changes that IMHO are dubious), here (beware of vandalism on both these links) and here. Feel free to discuss on FMS, or Freemail/email (with GPG) me if you want to talk in private. There are of course thousands of tasks, large and small, on the bug tracker. Seriously, we need more volunteer developers, and better now than later as I'll still be around to help you if you get stuck.

Of course there are lots of other things I'd like to see on Freenet: Various ideas to further improve performance (such as Bloom filter sharing), many of which have been bouncing around for years; filters for more content types, or a distro/VM/package that solves the same problem in a different way; lots and lots and lots of bugfixes; "eating our own dogfood" and secure release channels; and so on. There has been some discussion of using Bitcoin bounties to fund and pay for development; if handled correctly this might make our development process more robust as well as allowing us to hire people who aren't always long-term contributors. However, there are some major drawbacks too, so we'll see where it goes. Overall, IMHO we have a good handle on many of the key, fundamental problems in Freenet, but insufficient resources with which to implement the solutions. Drop us a few million dollars and we could make an interesting Freenet 1.0! I'd be prepared to put off Cambridge for a year to help set things up, though I do intend to move on eventually (but moving on when so many key things are still unresolved is very unsatisfactory, and I do, still, passionately believe in Freenet, in spite of valuing many other things too)... Of course this is only half serious; I don't think I could manage 10 paid devs, Ian is more interested in Tahrir lately (partly because of repeated disagreements with me!), and an outsider would likely spend $2M on a new web interface and a Facebook plugin and declare job done! Or I could come back after uni - but I won't come back as paid staff unless we have a lot more resources (volunteer developers or paid ones), although I intend to work for Freenet over some of the summer vacations while I'm at uni.

The political climate is interesting. On the one hand, Snowden and apparently other people have confirmed what we paranoid privacy geeks and conspiracy theorists have suspected for years: that pretty much everything you do online is monitored and analysed in bulk, by various governments as well as by those selling advertising. "Economic wellbeing" is a well-established justification for intelligence - is this stuff used for inter-corporate espionage? Probably! On the other hand, people are openly talking about blocking darknets, and there is a good deal of interest in censoring even legal content on the web, social networks and so on. Both trends could play into our hands - provided that we don't see a really heavy-handed clampdown on peer to peer. This seems unlikely as it has generally failed so far, at least to prevent copyright infringement, and as p2p may yet be strategically significant for business with WebRTC. Unfortunately, to raise significant resources from this situation, we need something new and shiny. Most people who might care about Freenet are already disillusioned with it. Also, Kickstarter works best for commercial projects, and explicitly forbids social networks (Freenet might be seen as one); Freenet cannot be a commercial project AFAICS, because trust requires open source code, and long term persistence requires that the network not be dependent on its original authors staying around.

Now for some more politics: some time ago somebody sent me some stuff on Zeitgeist. Post-scarcity economics, transhumanism and all the rest are fascinating, and in the long run probably contain some important truths. The catch is that in the short to medium term, scarcity is going to increase, and not just by a little either; food prices, climate change, economic collapse or stagnation, increasing instability and so on - there are plenty of storm clouds on the horizon. This is one of the reasons I work on Freenet: we need safeguards, even new ones that go beyond the traditional democratic safeguards, because as we've seen before, all checks and balances are fallible.

I'm not a traditional libertarian or ancap; most of them are stuck in ancient battles which may be obsoleted by events that aren't that far away, and I believe that any system that doesn't limit poverty is going to lead to slavery and oppression, and furthermore environmental problems (especially global ones) can be difficult to deal with without a strong state. However, I'm not entirely a statist socialist either; some problems can be solved best by the state, some can't. I do think that "new" digital freedoms, which are translations of the old freedoms into new contexts and new possibilities, are something we need to seriously consider: Should there be an absolute right to exchange data of all kinds privately, and publish it anonymously? Freenet goes some way towards that, and while any sufficiently motivated state could crush Freenet and all other p2p, it would require political capital, and with WebRTC coming up probably some economic impact too. Should there be a right to be able to buy unrestricted 3d printers and other programmable fabrication tools, as long as you don't use them to hurt other people? Enforcing the old world order means preventing people from buying general purpose tools, and this may be both impractical and oppressive: If you can buy a 3d printer, should it be capable of printing whatever you design or download within its practical limitations, including items that violate intellectual property, firearms, drugs, etc, or should it only be able to create oligopoly-approved content downloaded from an official app store? That is one of many questions we will need to grapple with in the near future. It is analogous to the questions of purely virtual intellectual property: I don't support copyright infringement, though I'm skeptical about patents (especially software patents), but I object to limiting my freedom to use my own computing resources as I see fit, which are an extension of my mind, just to prevent me from violating copyrights. Of course some of these "digital rights" (a concept that is not fully formed in my mind at least) have technical aspects, and can survive as a safeguard provided they have a sufficiently large community, and provided the opposition isn't too fierce. We'll see where it goes!

I have some sympathy for Extropism, though I'm not convinced about abolition of money. In any case, it's going to be quite some time before we can talk meaningfully about post-scarcity anything. Of course the future is impossible to predict, mainly because while you can extrapolate existing trends, you can never tell what the order of events will be, what will happen before what, what will turn out to be really hard, what could work but just doesn't happen because of lack of opportunity, vision, or business sense, and so on. What was true for the last century wasn't necessarily true for the one before or for this one or the one after; assumptions we take for granted about life, politics, economics and human nature sometimes turn out to be completely wrong, or just obsolete. My relatively poor social skills, lack of photogenicity and record of working on Freenet are not the only reasons why I work in technology and not politics; technology can often make a bigger difference to people's lives. Which is part of the reason why I'm off to university in October: To learn to make things better (and exactly why and how) from one of the best universities in the world, to find out whether academia is for me, and to allow me a good chance of getting a job doing something really useful and interesting (like I have for Freenet, but hopefully with more resources!), rather than being limited to the handful of jobs that don't need a degree and don't reject my eccentric experience.

Happy Freeneting!

2012/10/24

Noise

My last post referred, in a somewhat panicky way, to a new law currently being debated in the UK which, as I saw it at the time, might have been used to compel Freenet's release managers (notably me) to release builds with backdoors for tracking. The details were unclear, but the intent was for law enforcement, intelligence agencies etc to have access to logs of who you communicate with - even if it's through an online game, social networking site, peer to peer network etc. I had feared that this might lead to me being forced to implement backdoors in Freenet, which has happened in other European countries. However, it now appears that the powers will simply require ISPs (apparently only the biggest six, who account for 94% of UK internet users) to surveil or block p2p, SSL-encrypted social networks and so on. This doesn't mean they will block Facebook and thus provoke a mass exodus; there would need to be some voluntary involvement by the social networks to avoid being blocked, but we know already that some of them have been scanning their users' posts for references to criminality and passing it on. Freenet, however, certainly would not cooperate, which means that if the legislation passes in its current form, Freenet may be blocked for 94% of the UK. It does not mean I will be forced to put in a back door, which I certainly will not do under any circumstances. I apologise for not digging deeper last time.

However, the key point is this: Ideally, security should not depend on one person's trustworthiness. There are a number of things we can do to ensure that if a release manager's computer is compromised, bad guys can't release backdoored builds. We should do some of them, and my comments were in part a plea for help. These issues are likely to be common to any open source project with limited resources that needs to publish auto-updates... They are interesting problems, and ones that certainly must be solved eventually, although there is no need for immediate panic given the above.

Right now, any of the release managers for Freenet can deploy a build, and all the nodes with auto-update enabled will update themselves fairly quickly. This is a good thing: It allows us to fix bugs quickly, to change the network protocol to try to improve performance, and to measure the effect of changes. This is why there are "mandatory" builds occasionally (which won't talk to older versions). However, it also means that, in theory, corrupted builds could be distributed.

IMHO there are two components to any meaningful safeguard:

  1. Some way to ensure the binary update corresponds to the published source code.
  2. Some way to build trust in the published source code.

Both of these are important. A partial solution to the first is provided by ensuring that the source in the GitHub repository and on Freenet corresponds to the binaries released in the update. This is the purpose of the "verify-build" script in the Maintenance scripts repository. At the moment it can verify the published jar on the web against the published source code on the web and the git repository - it doesn't yet fetch the jar and source from Freenet. operhiem1 has been very helpful, and finishing it shouldn't be a big job. Ideally, we would like several well-known anonymous FMS posters to run the script, as well as developers and other traceable people.
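
To make the idea concrete, here is a minimal sketch (in Java, with made-up file names) of the kind of check involved: build a jar from the published source, then compare its hash against the jar the auto-updater shipped. The real verify-build script has to cope with build metadata and fetch from several places, so treat this purely as an illustration of the shape of the check.

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    // Sketch only: the real verify-build script does much more (and has to cope
    // with non-reproducible build metadata). File names here are hypothetical.
    public class VerifyBuildSketch {

        static String sha256(Path file) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(file)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) md.update(buf, 0, n);
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            String local = sha256(Path.of("freenet-built-from-source.jar"));
            String published = sha256(Path.of("freenet-published.jar"));
            System.out.println(local.equals(published)
                    ? "MATCH: published jar corresponds to the published source"
                    : "MISMATCH: do not trust this build until it is explained");
        }
    }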

The script requires Linux, and the exact same version of Java that the person doing the release (not always me) used to build it. This suggests it should be run in a virtual machine. Integrating this into the update process (creating a disk image and then calling the VM with a platform-specific command), for those users who don't mind the extra overhead (and maintenance), would be interesting.

Or we could use it as part of a "honey trap": One problem with just using the script is that once an update has happened, the corrupted nodes may send you a "clean" binary. But they will have to send the corrupted binary out at some point, and provided the attacker doesn't know which nodes are honey traps, this could be effective: the honey trap could detect, and prove, that a bogus build has been distributed.

Even better would be for a release to require not only a signature from the release manager, but also signatures from some automated signature services run by third parties. These would run the same script, and certify that 1) the binary corresponds to the source code, 2) the source code published on Freenet corresponds to that published on GitHub, and 3) there has only been one such signature for any single build (the sigs would be published). A requirement for multiple signatures could be implemented using the proposed PSK key type, but since the release manager isn't likely to be a spammer (he has better things to do even if he's malicious), we could just use a separate key-stream for the signatures.
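
As a rough illustration of the acceptance policy only (not of the cryptography, which is assumed to have been verified elsewhere), an updater might apply something like the following; the identity names and the threshold are invented for the example.

    import java.util.Set;

    // Policy sketch only: assume the actual signature verification has already
    // happened and we are handed the identities whose signatures checked out.
    // The identity names and the threshold below are hypothetical.
    public class UpdateAcceptancePolicy {
        private static final String RELEASE_MANAGER = "release-manager";
        private static final Set<String> SIGNING_SERVICES =
                Set.of("signing-service-a", "signing-service-b", "signing-service-c");
        private static final int REQUIRED_SERVICE_SIGNATURES = 2;

        static boolean acceptBuild(Set<String> verifiedSigners) {
            if (!verifiedSigners.contains(RELEASE_MANAGER)) return false;
            long services = verifiedSigners.stream()
                    .filter(SIGNING_SERVICES::contains)
                    .count();
            return services >= REQUIRED_SERVICE_SIGNATURES;
        }

        public static void main(String[] args) {
            // Accepted: release manager plus two independent services.
            System.out.println(acceptBuild(Set.of("release-manager", "signing-service-a", "signing-service-b")));
            // Rejected: only one service has signed.
            System.out.println(acceptBuild(Set.of("release-manager", "signing-service-a")));
        }
    }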

This brings us to the second part: How do we establish that the code is "clean"? In the first place, publishing it means that there is at least the potential for anyone to read it, find bugs, and shout about them, if necessary via means other than Freenet. So it is certainly not useless to be able to verify the binary. But ideally we'd like more than one person to have explicitly reviewed the source and signed it before release. However, requiring a second developer to read the diffs and sign them just before a release is rather onerous. Normally a release takes some time, so it would be better if the snapshots (or individual commits) could be signed by those developers who are keeping track of the code, whenever it is convenient for them, using the tools Git provides already, or using some Freenet-specific tool. Testers could also sign snapshots - not guaranteeing that they are not malicious, but at least providing a basic sanity check. It is likely that we will need to release builds quickly on occasion (e.g. to fix an urgent bug), at least for some time to come, although we could treat such builds differently; but this is an interesting direction in the long run.

There have been fairly detailed discussions on this on FMS, I recommend reading the threads there; some of it is little better than flaming but much of it is interesting and relevant.

Status update

A new build will be released shortly, which includes major changes to the auto-updater system. These are not related to the above, but will allow us to include separate files and not just update freenet.jar and freenet-ext.jar. For example, we will be shipping the Bouncycastle crypto library, which we will soon use to upgrade some of our encryption code, to be faster, smaller (in bytes) and more secure. Thanks to Nextgens (Florent) for his work on the latter and prodding on the former! This will also enable us to finally implement Infinity0's (Ximin's) ambition to split up freenet-ext.jar.

Another big project on the horizon is a new keytype, Programmable Signed Keys (PSKs). This has the potential to greatly improve performance of forums and lots of similar applications over Freenet. I won't go into detail just yet. I'm hoping that Sadao, who originally proposed a simpler version of the key type (where you can post if your identity is signed by the key's pubkey), will use this to implement a new messaging system. Also, this is of course similar to a proposal by one of the original developers around 0.4 or even earlier, which wasn't acted on for simplicity's sake (it's also inspired by bitcoin's workings); the complexity of messaging systems built on Freenet means its time has come. Making chat, forums, social networking / microblogging, even some forms of search, scale over Freenet is one of the 7 or so "impossible things" that Freenet still needs to do to make serious progress. I now have a fair idea how to solve most of the others too. :)

And of course, there are various small but annoying admin-like issues to deal with, such as how to automatically upgrade the wrapper (especially on Windows), updating the windows installer, and so on. There is lots of code to merge. There are the ever present client layer bugs and db4o corruption problems, various ideas for how to reduce disk I/O, how to safely gather data to understand why Freenet's data persistence isn't what we might hope it to be, and a whole stack of major security improvements which should make tracing content uploaders very difficult even on opennet. And then there's the stuff that's been in the queue for longer, for example making darknet easier, and sorting out load management. So much to do, so little time, but I really think we are poised to make some serious progress, including on the "impossible" issues or "grand challenges", such as scalable-enough chat, fixing the Pitch Black attack on darknet, beating Mobile Attacker Source Tracing, substantially improving data persistence, and so on...

Meanwhile I have interviews for university in the near future. I'm having to spend a significant amount of time on that, mostly sorting out my maths (I have one more A2 to do next year). I have one interview booked so far, no word from the others yet. But I'm also managing to make some significant progress on Freenet. Thanks to all our volunteers!

2012/09/11

How safe is Freenet anyway?

Obligatory toad picture

There has been a lot of discussion of security lately, initially because of these academics (Dong, Baumeister, Duan et al). They published two attacks on Freenet, and then appear to have got funding for an effort to unmask Freenet users, as an "Experimental Study of Accountability in Existing Anonymous Networks". Their goals are apparently as political as ours, but in the opposite direction; read the Project Summary. Which is fine by me, see the end of this post! Anyway, I'm absolutely delighted that somebody is doing serious research on Freenet's security, thanks guys!

Their first paper, "A Traceback Attack on Freenet", concerns a genuinely new attack based on request UIDs. It was possible to trace a single request back to its originator, and this seems to go back some years; the conditions required are 1) that Freenet gives a distinct response, RejectedLoop, when a request UID has already passed a node, 2) that it remembers the UIDs of prior completed requests, and 3) that you can somehow probe a node for a UID without the request being routed further. We have now eliminated #2, since it isn't necessary; this eliminates 99% of the threat from that attack, because the only way to carry it out now is to connect to the nodes behind the point the request has reached, before the request finishes. This might occasionally be possible for inserts, but we will improve it further; in any case it's likely that other known attacks are more powerful than this now.
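
The shape of the fix to #2 is simple. A minimal sketch (not Freenet's actual code) might look like this: UIDs are tracked only while a request is in flight, so probing with an old UID after completion no longer produces a distinguishing RejectedLoop.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch only, not Freenet's actual implementation: remember request UIDs
    // while a request is in flight, and forget them as soon as it completes, so
    // that probing a node with a stale UID afterwards no longer yields a
    // distinguishing RejectedLoop response.
    public class InFlightUids {
        private final Set<Long> inFlight = ConcurrentHashMap.newKeySet();

        /** Returns false if this UID is already running here, i.e. the caller should send RejectedLoop. */
        public boolean startRequest(long uid) {
            return inFlight.add(uid);
        }

        /** Called when the request completes (success or failure): the UID is forgotten. */
        public void completeRequest(long uid) {
            inFlight.remove(uid);
        }
    }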

We have planned further improvements, and a more radical solution that might have an impact on routing performance on darknet. Their paper suggests that we use a single indistinguishable failure mode; we do need to look at this but probably can't reduce it that far due to the need to distinguish between fatal failures and non-fatal ones, limit the total hops visited, and identify overloaded nodes.

Sadly I haven't had time to read their second paper yet. Nextgens tells me it's largely a confirmation of what we already know: An attacker can take over your routing table on opennet. This is unfortunate, but difficult to prevent, and is one of many good reasons why in the long run we need a large darknet. This will require improvements to both performance and especially usability, which are still planned. It will also require solving the Pitch Black attack. We have a solution, which appears to work, but the person working on it hasn't got back to us yet. If he doesn't, we may eventually need to implement our own simulations to prove it works.

I have often said publicly that opennet sucks. This is true, in that its security is much lower than is achievable on darknet. I have also said that opennet's security is hopeless. Freenet is still experimental, so you probably shouldn't rely on it, but "hopeless" is perhaps a slight exaggeration. I explained a bit on FMS recently, so I will post the details here too.

Freenet's threat model (very basic version!)

We assume that the attacker is initially distant. That is, he knows of an anonymous identity (a chat identity, the author of a freesite, etc) which he wants to trace back to its real-world identity. We want to make this task as difficult as possible.

In other words, Freenet is about hiding in the crowd. It's not much use if you are already on their shortlist of suspects. In the more unpleasant regimes, and even in the west quite often, you have bigger problems at this point (e.g. dawn raids!). In particular, Freenet does not provide strong protection if the attacker is already connected to the target: Provided he can recognise the content (possibly after the fact, from logs), he can identify with reasonable confidence that the target is the originator. One consequence of this is that you need a reasonably large network to hide in, and if the bad guy can afford to connect to everyone (or in some cases exclude a group of nodes at a time) he can probably trace you.

We do not provide the sort of anonymity guarantees that mixnets such as Tor can in theory provide. In practice a scalable mixnet is a difficult problem for similar reasons to why opennet on Freenet is hard, e.g. route selection attacks. But it's a different, and difficult, problem. We do plan eventually to provide stronger protection against nearby attackers, using some form of tunneling (classic mixnet sadly doesn't work on darknet). However, in the near future we have bigger fish to fry, as I will explain below.

The other part of the Freenet threat model is blocking: We want to make it hard to block Freenet on, for example, a national firewall. Again, blocking opennet is straightforward, because you can simply harvest the nodes and block them. We try to make it as difficult as possible to identify the Freenet protocol, and one of our Summer of Code students this year has been working on transport plugins, which will enable us to disguise Freenet traffic as VoIP, HTTP, etc.

Of course, traffic flow analysis can identify any peer to peer network, including Freenet, even if we have very good steganography. The brute force approach of blocking all packets between two "consumer" IP addresses (similar to many of the spam RBLs) will also crush any fully decentralised network. But this is likely to be somewhat expensive and have some collateral damage; provided that VoIP remains important, and largely decentralised in its actual traffic flows, hopefully this won't happen soon, outside of places like Iran.

Obviously traffic analysis, combined with malicious nodes and so on, can be used to try to identify what is going on on Freenet. This is a problem for Tor too. Freenet is designed to have higher latency than Tor and is mostly used for big downloads, which is a slight advantage, and there are a lot of things we can do in future. I'm not going to make any promises about traffic analysis here, and no doubt there are other classes of attacks.

The bottom line is use darknet. No, really, use darknet! It's vastly more secure because it's virtually impossible for the attacker to connect to everyone, and very expensive for him even to connect to one more node: he has to either hack your computer or social engineer you. Which isn't necessarily all that expensive - it's just that compared to opennet, where you can just announce to a location, it's very expensive.

Also, for reasons explained below, for now we only provide strong protection for original content uploaders (and then only if they insert to a random key, i.e. SSK@). If you download content, or reinsert it, you are potentially vulnerable. However, we will improve on this; see the next section.

Mobile attacker source tracing and how to fix it

Unfortunately, it's actually slightly worse than the above. One attack we have been trying to deal with for some time is called "mobile attacker source tracing". This potentially enables an attacker to trace content authors without having to connect to a lot of nodes at once. The basic principle is that if you can identify requests, you listen for them and work out where on the network the originator couldn't have been (based on routing). You correlate this over many requests, and then announce to a location closer to where you think he might be. If you get it right, you will see more requests, enabling you to approach him rapidly.

This has been documented on the FAQ and the wiki for some time, so it's not news. And it is not as bad as it sounds, even on opennet: It relies on being able to identify the keys before the (large) insert/request finishes. Which means it only applies to 1) downloads of files known publicly and 2) uploads to predictable keys. In other words, if you always insert as SSK@, your uploads can't be traced - the attacker can only use your chat posts, the block at the top of your freesite uploads, etc. It ought to be possible for original content authors to remain reasonably safe, although they will have to completely change their identity from time to time, and high-frequency posting makes them more vulnerable. Sadly we don't yet know how to quantify this. And downloaders are vulnerable, as is anyone who reinserts a file to a known key. But we definitely provide some protection for original content uploaders, provided that re-inserts are done by other people.

After the recent discussion of the new attacks, we (mainly me and Evan) had a long brainstorming session concerning attacks. IMHO we can improve the situation dramatically. It's a combination of two ideas, and it can potentially protect both inserters and requesters.

First, we can greatly slow down the attack by ensuring that high-HTL requests are not routed to new connections (say, peers that have connected within the last hour). This will have some impact on routing, but we can make the time limit reduce rapidly with HTL, as the restriction is less powerful the further away you are from the originator. Also, we will need to tweak the time limits based on our peers' uptimes. There are of course targeted attacks against this, but it's still an improvement. The result is that an attacker cannot move towards the target quickly: On each new node it connects to, it will have to wait a long time before it can capture more data. This makes small, quick requests much safer, and even for larger requests it means the attacker will need more time or more resources or both.
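
A minimal sketch of what such a filter might look like follows. The one-hour figure comes from the paragraph above; how quickly the limit shrinks with HTL, and the Peer abstraction, are assumptions for illustration.

    import java.util.List;
    import java.util.stream.Collectors;

    // Sketch of the proposed filter, not Freenet's actual code. The one-hour
    // figure comes from the discussion above; how quickly the limit shrinks with
    // HTL, and the Peer abstraction, are assumptions for illustration.
    public class NewPeerFilter {
        static final int MAX_HTL = 18;

        interface Peer {
            long connectedForMillis(); // how long this peer has been connected
        }

        // Required "connection age" before we will route a request at this HTL
        // to a peer: a full hour at the maximum HTL, halving each hop below it,
        // and no restriction once we are a few hops from the originator.
        static long requiredConnectionAgeMillis(int htl) {
            if (htl < MAX_HTL - 2) return 0;
            long hour = 60L * 60L * 1000L;
            return hour >> (MAX_HTL - htl); // 60, 30 or 15 minutes
        }

        static List<Peer> routableCandidates(List<Peer> peers, int htl) {
            long required = requiredConnectionAgeMillis(htl);
            return peers.stream()
                    .filter(p -> p.connectedForMillis() >= required)
                    .collect(Collectors.toList());
        }
    }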

Secondly, routing is predictable, and this is the whole basis of the source tracing attack. We already treat the first few hops after a request starts specially: Above HTL 16, we don't cache requests, and above HTL 15, we don't cache inserts. This is to prevent datastore probing, both for what you have requested directly (a targeted attack, arguably outside our threat model), and to trace a download or insert after the fact (a more serious threat). The initial, maximum HTL is 18, and we spend an average of 2 hops (randomly) at 18; after that we decrement by one per hop, down to a minimum of 2.
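
Roughly, the existing scheme can be expressed like this. The constants come from the description above; the probabilistic decrement is one simple way to spend an average of two hops at the maximum, and the whole thing is a simplified sketch rather than Freenet's actual code.

    import java.util.Random;

    // Sketch of the HTL scheme described above (constants from the text; the
    // probabilistic decrement is one simple way to spend an average of two hops
    // at the maximum - this is not Freenet's actual code).
    public class HtlPolicy {
        static final int MAX_HTL = 18;
        static final int MIN_HTL = 2;
        static final int NO_CACHE_REQUESTS_ABOVE = 16; // don't cache requests above this HTL
        static final int NO_CACHE_INSERTS_ABOVE = 15;  // don't cache inserts above this HTL

        private final Random random = new Random();

        // Decrement with probability 1/2 while at the maximum, giving an expected
        // two hops at HTL 18; below that, decrement by one per hop down to MIN_HTL.
        int decrement(int htl) {
            if (htl == MAX_HTL && random.nextBoolean()) return htl;
            return Math.max(MIN_HTL, htl - 1);
        }

        boolean shouldCache(int htl, boolean isInsert) {
            return htl <= (isInsert ? NO_CACHE_INSERTS_ABOVE : NO_CACHE_REQUESTS_ABOVE);
        }
    }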

So the proposal is, when we are at "high HTL", simply route randomly. This greatly blurs the original location of the requestor, because normal routing doesn't start until the request is already several hops away from it. Assuming the HTL thresholds are correct, the attacker will probably now need to rely on a completely different, and more expensive, attack: trying to intercept requests before they leave the random-routing/high-HTL stage. And because requests are routed to a random node at each hop during that stage, the attacker gets much less information from them.
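
In code, the change is essentially one branch in next-hop selection. A sketch, under an assumed threshold and an assumed Peer abstraction:

    import java.util.Comparator;
    import java.util.List;
    import java.util.Random;

    // Sketch of the proposal, with an assumed threshold and Peer abstraction:
    // while a request is still in the "high HTL" band near its originator, pick
    // the next hop at random; below the threshold, route greedily by location.
    public class HighHtlRandomRouting {
        static final int RANDOM_ROUTING_ABOVE_HTL = 16; // assumed threshold

        interface Peer {
            double location(); // position on the [0,1) keyspace circle
        }

        private final Random random = new Random();

        Peer selectNextHop(List<Peer> candidates, int htl, double targetLocation) {
            if (htl > RANDOM_ROUTING_ABOVE_HTL) {
                return candidates.get(random.nextInt(candidates.size()));
            }
            return candidates.stream()
                    .min(Comparator.comparingDouble((Peer p) ->
                            circularDistance(p.location(), targetLocation)))
                    .orElseThrow();
        }

        // Distance around the circular keyspace.
        static double circularDistance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }
    }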

For inserts, this is actually extremely cheap. We reach the "ideal" node by around HTL 16 anyway, so there is plenty of space left, and we're not visiting any more nodes. So we might reach a few fewer nodes around the target, but we'll still visit the important ones. We might lose a bit of performance depending on how we deal with backed-off peers in our random routing. Plus it may help us to escape "rabbit holes" etc.

However, for requests, this may cause a significant increase in latency, since the extra hops are at the beginning. On the other hand, the fact that we already don't cache until HTL is below this point may mean we are currently caching a lot less than we need to, since we will often reach the "ideal" node before we exit the "high HTL" no-caching mode. Also, if we do this we won't path fold while we are routing randomly (that would be silly). There is a popular theory that a lot of nodes which do many local requests have too many long connections and not enough short connections: they optimise for their own local traffic ("long connections", across the keyspace), but ruin routing from a global point of view. The proposed solution (by TheSeeker IIRC?) is to not path fold close to the originator. Further, it improves security, since the data source can never directly see the request source. So while random routing on requests would cost us some latency, it may well improve performance in other ways. IMHO it's worth a try, but it could be a bumpy ride.

In other news...

Summer of Code has finished. Chetan is keen on getting his work on transport plugins merged and it looks like he's gonna stick around for a while, yay! Sometimes we pick up devs from Summer of Code, and when we do that's always awesome. Operhiem1's work on probe requests has both improved security (the old probes gave away far too much data) and given us more useful data to chew on. He and Evan have been working on simulations and confirmed that there is some sort of problem with routing, along with gathering data to inform simulations of possible solutions. I'm not sure what's going on with Pausb's web interface rewrite.

There have been a lot of complaints about disk I/O and resource usage in general. I have implemented one improvement, which should be deployed soon: a small tweak to the "slot filters" that took over from Bloom filters (an important optimisation to the datastore). This should save at least one disk seek per block write. The bigger changes, notably queueing up a bunch of blocks to write to the datastore, will be implemented later.
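
For those unfamiliar with the idea, a slot filter is roughly an in-memory summary of what is in each on-disk slot, so the store can often skip a disk seek. A conceptual sketch (not Freenet's actual datastore code) follows.

    // Conceptual sketch of a slot filter (not Freenet's actual datastore code):
    // keep a small in-memory record per on-disk slot, so reads and writes can
    // often decide what to do without first seeking to the slot on disk.
    public class SlotFilter {
        private static final int EMPTY = 0;
        private final int[] slots; // 0 = empty, otherwise a short hash of the stored key

        public SlotFilter(int slotCount) {
            this.slots = new int[slotCount];
        }

        /** True if the slot is free, so a write can go ahead without reading it first. */
        public boolean isEmpty(int slot) {
            return slots[slot] == EMPTY;
        }

        /** True if the slot might hold this key; false means it definitely does not. */
        public boolean mightContain(int slot, int keyHash) {
            return slots[slot] == shortHash(keyHash);
        }

        public void markOccupied(int slot, int keyHash) {
            slots[slot] = shortHash(keyHash);
        }

        public void markEmpty(int slot) {
            slots[slot] = EMPTY;
        }

        private static int shortHash(int keyHash) {
            int h = keyHash & 0xffff;
            return h == EMPTY ? 1 : h; // never collide with the EMPTY marker
        }
    }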

We are still having problems with db4o corrupting itself, especially when there are uploads queued. The next thing to try is to upgrade the database. A newer version may have more bugs, as seen in the past occasionally, but it's the logical next step. Automatic defrag on startup has been turned off as a stop-gap, which may help a little with stability, but does mean the database will get bigger and bigger and bigger...

Me

Finally, I have two major sources of stress out of the way: First, after much deliberation, I decided I want to apply for Computer Science, the topic I'm most likely to do well in. I was concerned that I would not be able to make a significant contribution to sustainability, but it turns out there are lots of sustainability-related computing problems, see e.g. here, here, here and especially this blog.

Second, as of yesterday, I've completed my university application, although it may need some changes depending on the college reviewing it. I still need to prepare for interview, including reading, an online course, and especially working on my maths; I have three maths exams next year, and maths will be important at interview. This means I will only be working part time on Freenet until January. After that I should be able to do more hours, up to starting at uni in September/October.

Freenet rant

Of course I may stay in networking; I will certainly have a look at networking/security issues at uni, and contribute to Freenet from time to time in the holidays (and maybe more than that depending on placements etc). From what I've heard, it should be possible to run Freenet in China, for example; always-on broadband connections are rapidly expanding, and don't have severe monthly transfer limits. And in the west there are many clear threats to freedom of speech online; Freenet is a useful additional safeguard. Regarding paedophiles, well, computers don't abuse children; paedophiles abuse children. Studies show that the massive expansion in the availability of child porn in the 80s in eastern Europe had no measurable effect on rates of child abuse in general, although there is evidence that convicted abusers who have access to child porn online are more likely to reoffend. For western purposes, they are the canaries in the coal mine; if Freenet is safe enough for them, it's probably safe enough for your blog that you can't publish openly without getting fired. Go read the Philosophy page!

I wish there weren't so many paedophiles (and hate groups, climate deniers, etc) using Freenet, but it is an inevitable consequence of the absolute freedom of speech that it tries to provide. I do think that in the West we probably have many years before the government would consider the sorts of radical measures needed to block a darknet. And for the record, I'm not an extremist libertarian on other issues. I'm deeply skeptical about bitcoin, and about the radical anarchocapitalist/agorist/cypherpunk agenda in general; I accept the scientific consensus on climate change, even though it requires some limited expansion of the state to deal with it (e.g. in the form of carbon taxes); and I generally care about social justice.

But I also believe that if we can make it harder for the state to say who is allowed to say what, that would be a good thing. Of course it's not just the state; it's also the corporations, or anyone who can write a threatening letter. Especially as technology, with its current centralising trends and increasingly powerful AI tools, allows for potentially very powerful monitoring and censorship mechanisms, for example Facebook's automatic fishing expeditions for criminal activity. Or the China-style climate of self-censorship and fear on the centralised filesharing sites, which was always inevitable; if you put all your files on third-party sites, one dubious copyright claim and they're gone, as happened to NASA recently. If you put it on your own domain, you need to fund it (which is rather harder if the credit card companies block you), and deal with DDoS attacks (which is harder if the big hosters won't touch you for political reasons), and then they block your domain. We need long term solutions before the censorship infrastructure online becomes so powerful, so blunt, so centralised, and with so few safeguards that it makes the censorship our constitutions are supposed to prevent look positively pointless. Freenet is one such solution, in some places, at some times.

PS If you can run the build verification scripts (in the GitHub maintenance scripts repository), please do! Under a UK law likely to be passed soon, I could be forced to distribute corrupted builds, and, on penalty of 2 years in prison, not be allowed to tip anyone off about it. INACCURATE: See the next story up.

2012/03/03

Decisions good and bad

Education and time

Summary: Due to education issues, I have done very little work on Freenet this year. Delaying some exams should mean I can do a lot more work on Freenet very soon, although I still won't be full time as I was for 9 years.

Last August I decided to retake a couple of A-levels in order to get into a good university. This was partly because of persistent worries about Freenet's funding coming to an end, and partly because I've worked on it for 9 years with substantial progress, yet we are still nowhere near where we need to be, never having really made the breakthrough in performance or number of users. The plan was that I would retake 2 A-levels over 1 year, work part time on Freenet in the meantime, and apply for university already having my grades.

This requires me to simultaneously:

  • Work a reasonable number of hours for Freenet, at least 10 hours a week.
  • Attend Physics AS and A2 classes at an awesome local college, and get a good grade in both, with exams in January and May.
  • Do Further Maths as a private candidate, with the assistance of my dad who is a former maths tutor, with exams in January and June, and STEP in June.

I have not managed to balance all these objectives: I have done much less work on Freenet this year than is necessary for my own financial and psychological stability, or for the Freenet Project's continued success. Meanwhile, FPI has accumulated a fair amount of funds - not enough for me to work full time, but I don't need to work full time. And we have been gaining users too - many of the old ones are as enthusiastic as ever, and there are even one or two new devs here and there.

The reason for this is that maths has taken a lot more time and effort than expected. There are a few parts of the core maths that I need to revise; most students do Further Maths over 2 years alongside Maths; and I need to go up 4 grades (although I did an Open University course in 2001 which I did well at). Hence I am worried that while I might get an A, I won't necessarily get an A*, and I won't do well in STEP (although I probably did reasonably well in the AS modules I've taken).

So the solution is to postpone the Further Maths A2 modules until next year (January and June), rather than doing them this June. This will allow me to work regularly, albeit part time, on Freenet, and do well in both subjects, hopefully doing STEP as well.

I'm still not sure what I want to apply for at university, but the above should help.

Freenet

The main thing I need to say here is that it is still the plan to try to make Freenet's development more distributed and less dependent on me. Bombe has put out a test build, and will soon release build 1406. I will still be involved in release management, but hope that Bombe can do some of it. We still need volunteers in many other areas; see the discussions on the mailing lists etc. And many patches have been contributed via FMS, which is really awesome.

There are also rumours about a port of FMS to a Freenet plugin, which would be very interesting. Meanwhile significant optimisations to WoT will be deployed very shortly, but they sadly don't fix all the performance problems for Freetalk.

There has been some discussion of how Freenet and Tahrir relate to each other. Tahrir is a project by Ian Clarke to build a distributed twitter-like system. It is especially important now that not only are the Chinese Weibo services doing keyword filtering and enforcing user traceability, but Twitter itself is doing country-specific censorship (what this entails is unclear; it would certainly include URL blocking, as it does in the west regarding child porn).

One of the proposals was that Ian take a Summer of Code student for Tahrir under the Freenet Project Inc umbrella. IMHO the two projects are pretty close together in terms of their political goals, and even in the long run many of their technical goals. Tahrir needs routing, darknet, and eventually some sort of filesharing capability. Freenet needs microblogging. How exactly this unfolds is not yet clear, but the two are complementary for the time being, and shouldn't clash.

One thing that both Freenet and Tahrir need is fast distributed spam-proof keyword search (possibly with social aspects). I'm not sure how to do this in either case, but a solution would be immensely helpful for both.

Finally, we appear to have had some press coverage lately; the latest graphs are encouraging.

See you soon ... !

2011/08/06

And now for something completely Freenet... (Freenet)

Problems with build 1389

As a few people have been reporting for a little while now, 1389 has problems. I had not been able to understand them properly, and was busy fixing the last few timeout bugs related to NLM, but now, having looked at some reasonably objective data generated by the auto-tests that run twice a day, I can confirm that something bad happened around the 24th of July (although I'm not sure what), and that since the 1st of August or thereabouts, tests have consistently failed. These appear to correspond to builds 1386-1388, and 1389, respectively. Build 1392 includes a few fixes for possible issues, mostly in 1389, two of which are fairly serious (see the detailed changelog for details). I'm not sure I 100% understand what went wrong, but there is good evidence that 1) slower inserts were very badly affected and 2) requests could get very confused after a timeout; both are fixed in 1392. The most annoying thing is that everything I've done in recent builds has itself been fixing various bugs - mostly timeout- and bogus-reject-related issues which jeopardise New Load Management, and show up in testing on testnet. However, there is a reasonable chance that 1392 will solve the problems, and if it doesn't I'll have a look at what else could have caused it.

Two other threads are going on. First, there are important opennet fixes in 1391 - it might take a while for opennet to sort itself out (and bootstrapping is still rather slow; we always need more seednodes, and this may be partly due to other recent problems). Second, newly bootstrapped nodes are very variable. Sometimes they work really fast immediately. Sometimes (rather often) they take ages to bootstrap, as above. Sometimes they get a lot of transfer failures - this is not consistent, it seems to show up some days and not others, although it seems to have sorted itself out in the last test I did today, after initially being very bad. I don't think there is a serious low-level issue with transfer failures (it would probably have shown up on testnet), but it's an outside possibility... I hope not, having spent many weeks debugging this earlier this year...

By the way, testnet is at the latest count a grand total of 14 nodes. We need more testers. It's not anonymous, but it helps me to debug Freenet, especially the harder bits (routing, updating, link level, etc), which can't be tested on just one node. See the installers for Windows, linux/mac. Please consider it if you have spare bandwidth, spare CPU cycles and spare disk space (it needs at least 5GB for logs, and logs quite heavily, so may need a fair bit of CPU).

New Load Management

Is nearly ready! And I really mean it this time! I had planned to deploy it last Monday, but last weekend me and ArneBab had a very helpful discussion about NLM and timeouts, which resulted in some significant changes (including making timeouts proportional to HTL, which will be deployed early next week), and then there was lots more debugging. It may be delayed further if 1392 doesn't solve the problems from 1389. But it should happen within a week or so. The main reason it has taken so long is that it has stirred up endless minor bugs (many of them related to timeouts and ordering of requests) that become important when we are predicting whether our requests are going to be accepted rather than just sending them and misrouting when they're not. Many other biggish changes happened in the process - fairness between peers (essential for security and for performance of low end nodes); new packet format (not strictly vital for NLM, but certainly related); the split between bulk requests and realtime requests (absolutely vital for NLM given it queues requests), and so on... Many thanks to ArneBab for helping me to understand the last few issues...
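
One of those changes, timeouts proportional to HTL, is simple enough to sketch; the per-hop base value here is an assumption for illustration, not the deployed figure.

    // Tiny sketch of the timeout change described above: make a request's
    // timeout proportional to how many hops it may still travel. The per-hop
    // base value is an assumption for illustration, not the deployed figure.
    public class HtlProportionalTimeout {
        static final long PER_HOP_TIMEOUT_MILLIS = 10_000; // assumed

        static long timeoutMillis(int htl) {
            return PER_HOP_TIMEOUT_MILLIS * Math.max(1, htl);
        }
    }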

Other priorities

Other priorities remain roughly the same: Before 0.8.0, we need the darknet enhancements, which should make it far more convenient to connect to your friends (and, automatically but with an opt-out, to their friends), with much better performance. Long term we need a global darknet, because this is the only option that is not easily blocked, as well as being a lot easier to secure on many levels. We still haven't fixed the Pitch Black attack, but we have a friend working on it for an academic project, as well as a reasonably firm suggestion from our main old-timer theoretician (but this has not been tested yet). But darknet is the future, and not just because it's buzzword compliant ("social" ...).

One thing that we arguably need is for the new multi-container freesite insert code to be enabled. In 1392 it is possible to use it for transient site inserts, provided the client asks for it as an FCP option. We may make it default soon, but for huge sites (or for cases where you want to insert a medium sized site persistently), we need it to work with persistence. That means a lot of debugging, and especially de-leaking, which will probably have to fall to me ...

Help wanted: Run a search index Spider

If you can run the Spider plugin and generate a search index, anonymously, please let us know. It is very resource hungry, but it does just about everything for you - just load it, configure it, and then get the USK@ from the logs (wrapper.log) once it's inserted the first edition (after a couple of days). The new search code is greatly improved on the old search code. Meanwhile, on FMS, Jeriadoc (probably this hobbit) has written a new search index which may improve performance but we are still discussing how to make it scalable. In any case it should be made official soon.

More broadly, I still need to make Sone, and probably Jfniki, official plugins. This mostly involves code review and a little bit of admin. Beyond that, there are loads of bugs to fix, and I hope that Freetalk (p0s) and Freemail (zidel) will be ready for a release in September. operhiem1 is dealing with various small but important changes on the web interface (such as the first-time-wizard alternative discussed on devl), and infinity0 has done useful work on packaging and splitting freenet-ext.jar which has not yet reached its full fruition. Further optimisations will probably have to wait, along with client layer auto-backups, and there are zillions of bugs to fix as always... And if you can help in any way, let us know - apart from general coding, testing and statistics, translations, and of course contributing content etc, we also need help with specific platforms, especially Windows (AHK experience a big bonus), and Mac.

Funding and 0.8

In approximately a week I'm going to have to ask for more funds. Including paypal, bitcoin (estimated), and bank, we have another $3539.81. If I charge my standard rate to FPI, this is good for around 5-6 weeks (those of you who think that's a lot of money remember I'm living in the UK, although I may be able to give FPI a discount now that I've paid off my various debts). Google has in the past been extraordinarily generous, but I doubt they will be again; even if they are, we should have something concrete to show for their last lot of money: Freenet 0.8! Right now we are some way away from that, but hopefully recent regressions can be cleared away quickly, opennet bootstrapping will stop being such a pain, new load management will work, and there will be enough time to debug as well as finish the few remaining critical features...

IMHO there is a good chance they won't come through this time; I suspect last year had much to do with their battles with China. In any case it's not exactly a sustainable long term solution! Therefore, if you have money (or bitcoins) that you can give us, this would be very helpful. See the donate page or send coins to 1966U1pjj15tLxPXZ19U48c99EJDkdXeqb. Hopefully we'll get enough interest with 0.8 to keep me going for a little while longer and reverse the recent decline in the number of people running Freenet - especially if 0.8 is really good! If not, I'll be looking for other work, and Freenet will continue (I'll stay on as a volunteer, or work part time/intermittently depending on what I can get elsewhere). Right now there aren't many core developers, but hopefully that will improve with more users...

Personally, if I can't continue as is, I'd prefer something I can mix with Freenet, so either contract work that I can do mostly offsite or commute to from Scarborough, or perhaps a more permanent post with enough space that I can continue to contribute occasionally when it isn't crunch time (e.g. Google allow you one day a week for personal projects). I'm also considering university in October 2012... The fact that I don't have a degree makes it harder to get a job, but, from the right place, it could be well worth the effort in lots of ways; while I have picked up a lot of things, there are areas where I would clearly have benefited from the right courses... Of course, I have access to quite a lot via ACM and will probably spend more time on professional development in the near future... I would certainly like Freenet to achieve its main technical goals before I sign off on it, although I would like to do something else eventually, there are many fascinating things out there ... equally importantly, if we achieve our main goals for Freenet, hopefully that means we have a lot more users, content, and developers...

Long term

Some of my goals for 1.0 are:

  • Build a global darknet: It is much, much harder either to block or surveil a darknet than an opennet Freenet. Other tools can't touch us here - while they can certainly beat us on performance, or in some cases even security (depending on your assumptions). This also implies you should be able to do a lot more with your darknet Friends. But far more seriously it requires that people have friends who are willing to use Freenet: Freenet must gain a certain level of acceptability. Better performance, ease of use, and functionality, will help with this, as will more content (which is a self-reinforcing cycle). So an integral part of "build a global darknet" is "get a lot more users"!
  • Significantly improve security: We need to fix the Pitch Black attack, for any practical darknet deployment, and to regain some academic credibility, which right now is very low. We need to implement some form of network-wide onion routing, for the top blocks, predictable partial reinserts, or chat posts. This won't (at least not at an acceptable performance level) protect you fully against your darknet peers, nor will it make Freenet secure on opennet (if the bad guys can afford to connect to everyone, which they probably can), but it will make finding the originator of data you don't like (which is really what Freenet is about), much more difficult than it is now. Transport plugins (make Freenet traffic look like TCP, VoIP or whatever) are also important in the long term, for both security and robustness.
  • Make Freenet a viable option for some less-than-free regimes: Clearly Freenet isn't going to be very relevant in Iran, with them proudly regressing from a relatively prosperous state with large dial-up penetration and a fair bit of broadband into North Korea style madness where ordinary citizens can only access the national, disconnected network. However, China has very modern broadband infrastructure. And they've more or less managed to block Tor. Traditional proxy exchanges aren't really viable because as soon as they get popular they get blocked. A few tools manage this by constantly adding new IP addresses and retiring old ones, but IMHO there is room for Freenet. Plus, in the early 2000s, we had fairly wide usage, and that good reputation persists in spite of few Chinese actually using Freenet today. Obviously, making this happen has a number of consequences in terms of code. Most of them are important whether we are targeting China or the West however: Good user interface, very easy to use darknet (invite friends etc), something like microblogging, easy content upload, and so on. When we get the software right we can then reach out to people who might be able to upload original or filtered content.
  • Much better performance (and keep it when scaling up!): Freenet can still improve its performance significantly. New load management may or may not be a big step forward. Bloom filter sharing (not literally, we will probably use compressed sparse filters) is another important step that should be taken in 0.9 or later. Data persistence has actually improved considerably (well, until the recent problems), although I still regard it as a key part of overall performance; download speeds may be more important in the near future however.
  • Scalable chat: Freenet-based chat systems such as Freetalk/WoT, FMS, and so on, do not really scale at the moment. We can probably make them scale far enough to be useful, with a lot of hacks and optimisations. True passive requests, pub/sub and similar things could make this go a lot further. And maybe there are other possibilities for new network-level functionality that would allow fast access to large chat systems regardless of spam. Sone and other microblogging style systems unfortunately are not exempt from these concerns: While it may be possible to follow specific other posters efficiently, functions like looking for mentions of yourself, searching for hashtags, or accepting replies from anywhere, are hard.
  • Long term requests: Much of Freenet's user interface is inherently high latency: Starting a download, for instance. With a good user interface, this is not necessarily a problem. Many darknets may have problems with not all of the nodes being online at the same time, so it makes sense to be able to pass requests around while the originator is offline, and have data trickle, queue, or flood back as appropriate. This may pave the way for full blown darknet sneakernet, although in many fairly hostile regimes Haggle and similar things may work better. It is also intimately related to the sort of publish/subscribe features needed for scalable chat.
  • Really good distributed search and plenty of other plugins/apps: Everyone wants a perfect, spam-proof, uncensorable, anonymous search. Nobody really gets it, but we can surely do far better than we do today, especially for files (probably partly based on WoT). In broader terms, the interesting things you do on the internet are a relatively limited set of actions (chat, tweet, mail, etc); we should be able to provide plugins to do most of the interesting things that make sense over Freenet (and do them well, with enough customisability for the specific usage), and provide a solid, documented API to enable developers to create new ones. Actual javascript support, or sandboxed plugins, is unlikely due to security issues (it's possible but maybe not very practical). Good, easy to use, bundled tools for inserting content are also very important.
  • No more problems with filters: Freenet needs filter, or embedded viewers for all popular file formats. An embedded viewer is especially interesting for video, where we'd like to be able to preview a file before it is fully loaded, or even play the parts that are loaded. There are some tricky tradeoffs, and hopefully in the long term performance will be sufficient to allow something approaching streaming performance.
  • Really good user interface: Not only does it need to look good, it needs to be as easy to do what you want to do as possible, without insulating you so much that you make a catastrophic mistake. Plus, we need to fix the remaining problems resulting from web browsers - particularly, we need something like the web pushing option to avoid the problems we have with parallel image loads. This also requires good, easy, appropriate installers for all interesting platforms.
  • Development hosted over Freenet: We should "eat our own dogfood", as the saying goes. Furthermore, long term most, if not all, developers can and should be anonymous. Over-Freenet source review tools ideally would support an anonymous workflow and allow for people to be as picky or as casual as they like.
  • Really low system overhead: It must be possible to run Freenet in the background without grossly interfering with everything else you are doing. If it is necessary to shut it down for gaming, this should be easy, and ideally automated. Also, Freenet should ideally not require a high end system, because while Moore's Law forgives many sins (though not those of disk I/O), cheap/old computers, computers in developing countries, and various odd form factors (phones, tablets etc) may have fewer resources than we expect.

I suspect Ian would say that we only really need a good user interface, and we can call it 1.0 now! Others would certainly have other priorities. We'll see. Also, to those who say this is over-ambitious, they may be right, but a great many things we have planned for and talked about for years have finally happened - for instance, the new packet format, new load management, the changes to USKs, and many other things.

The more people we have, the faster it will happen - and if it doesn't happen quickly enough, it probably won't happen at all, because we either won't have enough people, or governments will do horrible things. If we succeed earlier we have a greater chance of surviving, and can perhaps influence things a little too by weight of opinion; if we remain a tiny network with a reputation for foul content simply because there isn't much content overall, we are much more vulnerable. Plus, even if we get continued funding, I don't actually want to spend the rest of my working life on Freenet - and certainly not as the sole core coder, main debugger, architect, coordinator, backup windows maintainer, and almost everything else! So many thanks to all the volunteers who have been helping recently!

Watch this space...

2011/06/09

The Big Climate Reconnection in Scarborough! (climate, local)

On June 3rd, a bunch of us visited our MP, Robert Goodwill again to talk to him about climate change. This included me and Jane (both Scarborough Climate Alliance / Scarborough Climate Action Network), Sue (a friend of Jane's), Liz (Green Party sympathiser), Kevin Allen (Scarborough Private Tenants Rights Group), Gilbert (Greenpeace), and one other person whose name I have forgotten. A Scarborough Evening News reporter turned up to photograph us on his day off, although there was some disagreement about who should be in the photo (we got everyone without Goodwill, and me and Jane and Goodwill).

Our main demand was about insulation. Apart from the obvious climate change benefits (heating is nearly half the UK's carbon footprint), Scarborough is one of the worst places in England for fuel poverty, with 23% of households fuel poor (spending over 10% of their income on energy). Also, a lot of homes in Scarborough have solid, hard-to-insulate walls, and are rented. The government's Green Deal should help, but despite optimistic projections they refuse to tell us how many homes will be refitted with modern insulation. The Warm Homes Amendment requires the government to tell us exactly how many homes will be upgraded, which should help to build the local businesses involved, allow us to hold the government to account, and ensure they do whatever it takes to meet the targets they have already set themselves on fuel poverty and climate change. Mr Goodwill was helpful, explaining how such an amendment might make it into the bill. He has a good track record, but I suspect as a whip his hands are tied a bit. Last time we met him we asked for a minimum insulation standard for rented homes - which was granted in recent weeks, along with fairly strong targets for the 2020s, which we were going to ask for this time.

Our other main demand was that the government support funding for poor countries to adapt to climate change and develop their economies with less fossil fuel than we did. We can't afford 9 billion people who all live like Brits, let alone Americans, and most of the countries which are already being hit hard by climate change are too poor to adapt, and have very low carbon emissions. This will help unlock the negotiations, and is a matter of justice. And the governments of the world have already promised $100B/year by 2020 (but haven't anywhere near delivered it yet). This will have to come from new global taxes on flights, shipping, and banking (the Robin Hood Tax).

We also talked briefly about local action - local councils can achieve great things when they want to, but with budgets slashed, most won't unless it is made a core responsibility; otherwise it falls to the bottom of the pile, in spite of the chance to create jobs locally in insulation, transport, and other green industries, and the people involved get the sack. Councils should also have local carbon budgets and the funding to achieve them.

Beyond that, we talked about various things, including high speed rail, electric and hybrid cars, heat pumps and the recent carbon targets. We especially talked about transport, since Mr Goodwill was shadow roads minister. It appears that he was responsible in part for the abolition of speed cameras. Oh well, lost that one (more important losses are the Green Investment Bank's inability to borrow until 2015 and the farcical changes to solar panel subsidies - both will cost time and jobs). But overall it went pretty well: we kept the pressure up and discussed the issues. He pointed out, rightly, that the greenest mode of travel right now is the coach. But by 2030 all our electricity should be green, so the train will be even greener than it is now. This could be done with all renewables - mostly wind - but the government wants 40% nuclear.

Next month we will be jumping in the sea waving placards demanding that all net carbon emissions in the UK end by 2030. This is quite possible and won't require us to go back to the stone age - in fact it will create jobs, although we may not be able to fly much. See you then!

Gory details (longer account written sooner after the event)

Robert Goodwill is a frontbencher and a government whip. He generally votes in favour of action on climate change, has a good grasp of the issues, and gave us a good hour of his time today. On the other hand, he didn't commit to any specific actions resulting from the lobby. So on an objective level perhaps we achieved rather less than we might hope for (okay, I was tired!) - but as a government whip his options are probably pretty limited. In any case we made it clear that we care, and discussed various climate issues. An American president once said "I agree with you, I want to do it, now make me do it" - whatever an MP's instincts are, keeping up the pressure and linking it with constituency issues is a good thing. Still, we may not have been robust enough, and we certainly erred on the side of discussing everything and anything rather than extracting specific commitments. On the other hand, commitments may not have been forthcoming had we pushed for them. Still, I'm sure it was beneficial on some level. Another criticism I would make is that I talked too much!

After preliminaries we started with our core demands, starting with the Warm Homes Amendment, which requires the government to set a target for how much insulation will actually happen under the Green Deal and related measures, to provide confidence for the industry that will deliver them and ensure the government makes sure the policies are in place if things go wrong, and is supported by everyone from the TUC to the FSB to the FMB to Tearfund. He told us that it's unlikely that a non-government amendment will be accepted, although it might slip in in the Lords. I touched on the fact that there may not be enough funding for the Green Deal to insulate everything (it was in the press recently but maybe I shouldn't have mentioned it). Kevin raised various points related to the rented sector: A recent victory (which was one of our demands last time), the minimum insulation standard from 2018, may be diluted by poor enforcement related to difficulties over tenants rights. Goodwill is a landlord, and government generally wants to avoid extra regulation on landlords, but it may be possible to improve legislation. He also pointed out that people tend to be fairly good at reporting problems to councils, although of course all councils complain of resource problems!

We discussed adaptation funding (although we didn't mention the World Bank). This is essential for progress in the international negotiations, and given the difficulties with funding (e.g. the recent fight over the 0.7% aid target), will likely have to come mostly from internationally agreed taxes on banking (which he doubts will happen), aviation and shipping (which are less clear). I pointed out that high speed rail will probably make only a small difference, but we were broadly supportive of it. We did not extract the commitment to write to Andrew Mitchell about this, although that was an official goal of the Big Climate Reconnection.

We briefly touched on local council action: the Big Climate Reconnection demand is that climate action be made a core responsibility of councils; Friends of the Earth goes beyond that, asking for Local Carbon Budgets. Robert's view is that this is largely a matter for local lobbying - i.e. that there isn't a need for any statutory prodding. Clearly this is a mistake (FoE is right about local carbon budgets), but loading extra duties onto councils (no matter how good they are at them - some have done brilliantly) without extra resources is always going to be a problem - and extra resources are unlikely. However, in principle, it is wrong - it will get pushed to the bottom and the responsible people will lose their jobs, which is a shame as councils can do a lot when they want to (and have the resources to). Taking money away from councils while "empowering" them is a rather curious aspect of government policy. Sadly we didn't raise these issues in detail...

We discussed the carbon targets. The CCC says the 2020s targets will require that by 2025 we have 2.6 million heat pumps and that 31% of new cars are electric. The former seems unlikely: heat pumps are technically eligible for the Green Deal, but are likely way too expensive (unless prices come down) for the limited subsidy available over that period (they certainly won't pay for themselves at any realistic interest rate without a subsidy in most cases, although there are corner cases such as people with a convenient lake). Goodwill said that's likely to be up to the market for the time being. The government seems to hope that prices for solid wall insulation will come down with scale, so that the under-resourcing of the Green Deal/Energy Company Obligation will nonetheless be transformed into a comprehensive and rapid refit; if they are right, why not set a target as per the Warm Homes Amendment? We'll see. We briefly touched on wood and other biofuels - he seemed to support the general consensus that only a few biofuels are sustainable, while pointing out that some are.

We talked about transport at some length (Robert being the former shadow roads minister, and transport being a non-emissions-traded sector along with heat, i.e. more amenable to government policy). He doubts that the target above is feasible. There is work going on on infrastructure, and batteries will get better, so we'll see; it is probably a question of technology and price, and we didn't discuss specific subsidies.

We later discovered that he was partly responsible for the government's abolition of speed cameras, and grilled him a bit about this, discussing some options including limiters and traffic lights linked to speed cameras; nothing that was said makes me doubt my view that abolishing speed cameras was a purely populist measure! He is in favour of widening the A61 at least in parts to avoid overtaking safety issues, as were some of the people present (there's a rule that you should avoid disagreeing with each other - but it's harmless enough here given the scope and time). We discussed transport targets, and hybrids, which hold some promise, but improvements in efficiency, just as improvements in traffic congestion, seem to be mostly eaten up by increased mileage, whereas electric cars avoid this; he seemed to accept this point, and pointed out that roads are important for the economy - a point that we didn't dispute, although I believe we discussed rail freight at our last meeting in November. Some of us are committed (and bordering on militant!) pedestrians, some are clearly drivers, but there is little point in picking a fight on that.

Robert pointed out that the coach is the greenest mode of transport right now, although this will change with green electricity. He seemed to accept that it's likely that fewer people will have cars in future and that the long term trend is for petrol/diesel prices to rise. He said we can't just ban petrol cars - I countered that we can do exactly that, but only once there is a realistic alternative, which unfortunately most electric cars at present are not. He still believes in the hydrogen economy. I don't, but I hope he's right. It would have been a good opportunity to poke holes in the government's cuts to public transport in general, especially buses and coaches; also, while HS2 and some electrification are going ahead, rail is continuing to get more expensive and needs more help. As I've already mentioned we weren't all that combative, and I'm not convinced it would have been all that helpful - the government isn't about to reintroduce the subsidies it just cut. It was certainly worth talking about transport; we might have done a little better, but it's still useful for him to hear from us.

As we pointed out, the government has done some great things and some terrible things. Great things include the new carbon targets. Terrible things include the Green Investment Bank being unable to lend until 2015, which we discussed a little (and he gave me a paper), and the shambles of the solar PV reform. He mentioned a local business involved in PV (having earlier mentioned a local business recycling plastic), and was sympathetic to both concerns - in any case there isn't much to be done about either now. We agreed that the objective with PV is for the technology to develop and become cheaper (some PV companies expect radical improvements) - but it is necessary to keep the pressure up with subsidies for the time being, especially with Spain at least abolishing them due to the bail-out. There are also worries about rare earth supply. It was pointed out that the size limit on PV prevents larger, more efficient installations.

We discussed nuclear and the grid. I explained that intermittency generally can be managed: Given big interconnects, a small amount of storage, and smart appliances, a 100% renewable grid is possible (according to a number of studies, both here and elsewhere), and we can and should export renewable electricity as we have the biggest offshore resources in Europe. And offshore wind is coming down in price quickly, unlike nuclear. Jane made the usual attacks on nuclear - Robert pointed out that Canada is much further down the road of what to do with its nuclear waste than we are. He is pro-nuclear, and broadly supports the CCC's vision of 40% nuclear, 40% renewables and 20% coal with carbon capture by 2030; he is a little doubtful about carbon capture (so am I!). We agreed that as long as the grid is 100% low carbon by 2030 we'll be happy.

So all in all, we got together and reminded our MP that some of his constituents really do care about climate change, that fuel poverty is a huge issue here with 23% of residents affected (as mentioned in the letter we handed in), we discussed some specific ideas in accordance with the Big Climate Reconnection, and many related things, and gained some new contacts - notably Sue, her husband Ross (who I met later), both of whom have volunteered for our next event, and Liz. And we hopefully got some local press coverage (probably on next Thursday's Green Page) - which in Scarborough is pretty easy, but has an impact.

Next month, we will be participating in the Campaign against Climate Change's Zero Carbon Britain Day. This will be purely a publicity stunt, aiming for some press coverage and maybe to chat to a few people; we don't currently have the resources for a full blown outreach event. On the 16th of July we will be wading into the sea with (waterproof!) placards spelling out "Zero Carbon Britain/Scarborough by 2030 Before It's Too Late"; the photos will be circulated in the local press and on CCC's national website. We now have half a dozen volunteers and a photographer, so this should go well, and should draw some attention to the need for radical action. We are not solely concerned with sea level rise here, although that's a big concern for a lot of countries, but with all of the impacts - many of which are happening today, and will get much worse. This is mainly aimed at local people (and the national audience via the CCC's website collage), but pushing a radical yet technically feasible demand is a good thing. The government targets are for 60% by 2030, including full decarbonisation of the electricity supply. We can and will ask for more - 100% by 2030 is feasible - but it is equally important to ask for specific policies, such as the Warm Homes Amendment. Aggressive long term targets are great, form the framework for short term policies, and aid in holding the government to account for broken promises, but it's mostly the short term policies that will actually change things in the UK.

Globally, we're all waiting for the US - but the more we do, the better, partly because it builds technologies (which is not only a matter of basic research but also of deployment), partly because it shows what is feasible, and partly because we have a responsibility to act on our part of the problem. And the US is not the only part of the negotiations: most of the Africa bloc, the Least Developed Countries and the Small Island States support a 1.5 degree target - which would be technically extremely difficult - because they will be hardest hit, and are being hit hard already. Adaptation funding is critical for them: it must be raised in a predictable way and in sufficient quantity (hundreds of billions of dollars per annum by 2020), and disbursed fairly, in accordance with countries' need to adapt and grow out of poverty sustainably and their lack of ability to do this themselves. And then there's China, which has recently improved considerably, investing vast amounts in the new renewable industries that we should be investing in, and for much the same reason - to be a market leader, but also because they want to grow sustainably (and are openly admitting that some aspects of growth may need to be managed to avoid problems). They have a long way to go, but they're a long way from where the US is. All we can do is do our part locally, push for stronger targets in the EU (which fortunately the UK government is doing, at least in public), and sort out adaptation/mitigation funding for those hit hardest who are least able to adapt. Or we could implement a comprehensive trade embargo against the US, but unfortunately they're such a huge part of the economy that this won't happen; and if it did it would probably cause all manner of problems (and not just economic ones). Better to do what we can. If we despair and do nothing, it will certainly be significantly worse.

To clarify that last point: Two degrees is an important threshold. The IEA says we're probably going to miss it. Loads of analyses say we're probably going to miss it based on current international targets (these need to increase dramatically!). Even at 2 degrees, a lot of people will be hit very hard. The risk of feedbacks and a "point of no return" increases with the temperature rise (with some of them already having an effect, for instance methane and carbon dioxide releases from melting permafrost). It is essential that we do everything possible to avoid a 2 degree rise in temperature. In fact it is essential that we cut our carbon emissions as quickly as possible - but this is speaking globally, so developing and deploying technology is more important than any primitivist urge to end industrial society that no other country would follow (behaviour will have to change quite radically though e.g. air travel is unacceptable, period). The fact that we are close to the edge doesn't mean we should give up, it means we should double our efforts. Of course we should think about adaptation - many countries need to adapt right now - and IMHO also about geoengineering, as an expensive and risky last resort (ranging from expensive and very slow measures such as extracting carbon from the atmosphere, to expensive, fast and very risky measures such as engineering a mini-nuclear winter) - but it will be vastly cheaper to cut our carbon emissions now. And it may even grow our economy - and in a sustainable way, unlike the financial services and house prices bubble!

So what's going on in Freenet? (freenet)

Current priorities

Freetalk: Xor is still working on optimisations. Freetalk and WoT currently use far more resources than they need to. Some database-related changes have been deployed already; next, hopefully, he will be working on finishing the event-driven changes that will avoid a lot of unnecessary work (disk and CPU) in Freetalk and also fix the problem of messages appearing and disappearing (which I've seen recently and others report as a common and major problem). After that, hopefully some work on scalability, so that identities you've given high trust get polled more often. All of this is necessary before we release Freenet 0.8.
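
To illustrate the kind of scalability change I mean, here is a minimal sketch (all names and numbers invented, nothing to do with the actual Freetalk/WoT code) of polling identities at an interval that depends on the trust you've assigned them, rather than polling everyone equally often:

    import java.util.*;

    // Toy model: identities you trust highly get polled more often than
    // barely-trusted ones, instead of polling everyone at the same rate.
    public class TrustWeightedPolling {
        record Identity(String name, int trust) {} // trust in 0..100, a made-up scale

        // Minimum and maximum poll intervals, in minutes (arbitrary numbers).
        static final int MIN_INTERVAL = 15, MAX_INTERVAL = 24 * 60;

        static int pollIntervalMinutes(Identity id) {
            // Linear interpolation: trust 100 -> poll every 15 minutes,
            // trust 0 -> poll roughly once a day.
            return MAX_INTERVAL - (MAX_INTERVAL - MIN_INTERVAL) * id.trust() / 100;
        }

        public static void main(String[] args) {
            List<Identity> ids = List.of(new Identity("alice", 90),
                    new Identity("bob", 40), new Identity("mallory", 0));
            for (Identity id : ids)
                System.out.println(id.name() + ": poll every " + pollIntervalMinutes(id) + " min");
        }
    }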

New Load Management: There are still a few more bugs, but this is rapidly approaching a point where we can deploy it. One difficulty is that I am still seeing very high latency on some requests. New load management introduces queueing to make sure we can get a reasonably good route, but the impact should be limited. Having been unable to work out the likely impact from first principles, I will have to write a simple simulator soon to estimate the likely impact on latency. We may keep old load management for realtime requests and just use new load management on bulk requests, which are more tolerant of higher latencies. Overall throughput should improve, and when it is applied to inserts, data retention should improve significantly. It should also improve security, making Freenet less vulnerable to DoS attacks via spamming requests (although this is already greatly reduced thanks to last year's fair sharing changes). Hopefully new load management will be sorted out fairly soon, i.e. within weeks not months. Assuming we can make it work, this is the ultimate answer to NowWhat's thoughts on performance.
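
For what it's worth, the "simple simulator" could start out as small as this toy (entirely made-up numbers, and no relation to the real load management code): each hop adds a fixed transit time plus, under new load management, a random queueing delay, and we compare a longer misrouted path against a shorter queued one:

    import java.util.Random;

    // Toy comparison of request latency with and without per-hop queueing.
    // All numbers are invented; this only illustrates the trade-off, it is
    // not a model of the real load management code.
    public class QueueingLatencyToy {
        public static void main(String[] args) {
            Random rng = new Random(42);
            int trials = 100_000;
            double oldTotal = 0, newTotal = 0;
            for (int i = 0; i < trials; i++) {
                // Old load management: more hops (misrouting) but no queueing.
                oldTotal += pathLatency(rng, 12, 0.0);
                // New load management: better routing (fewer hops) but each hop
                // may queue the request for a while before forwarding it.
                newTotal += pathLatency(rng, 8, 200.0);
            }
            System.out.printf("old: %.0f ms, new: %.0f ms%n", oldTotal / trials, newTotal / trials);
        }

        // Latency of one request: per-hop transit (~100ms) plus an exponential queueing delay.
        static double pathLatency(Random rng, int hops, double meanQueueMs) {
            double total = 0;
            for (int h = 0; h < hops; h++) {
                total += 100; // transit + processing per hop
                if (meanQueueMs > 0)
                    total += -meanQueueMs * Math.log(1 - rng.nextDouble()); // exponential delay
            }
            return total;
        }
    }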

Darknet enhancements: Much more needs to be done. This is important for 0.8 because opennet is insecure. Recently we have deployed code to allow you to set a trust level and visibility level for each of your friends; this will be used in the darknet enhancements, the core feature being connecting to friends of friends, if they are visible (which enables better connectivity, better performance, and better invites). We really, really need to build the darknet!
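
As a rough illustration of the visibility idea (my own toy data model, not the real classes), the selection of connectable friends-of-friends might conceptually look like this:

    import java.util.*;

    // Toy model of "connect to friends of friends, if they are visible".
    // Each friend declares whether they may be shown to your other friends.
    public class FoafVisibilityToy {
        enum Visibility { HIDDEN, NAME_ONLY, VISIBLE }

        record Friend(String name, Visibility visibility, Set<String> theirFriends) {}

        // Which friend-of-friend node references could we offer to connect to?
        static Set<String> connectableFoafs(List<Friend> friends) {
            Set<String> direct = new HashSet<>();
            for (Friend f : friends) direct.add(f.name());
            Set<String> result = new TreeSet<>();
            for (Friend f : friends) {
                if (f.visibility() != Visibility.VISIBLE) continue; // respect their setting
                for (String foaf : f.theirFriends())
                    if (!direct.contains(foaf)) result.add(foaf);
            }
            return result;
        }

        public static void main(String[] args) {
            List<Friend> friends = List.of(
                    new Friend("alice", Visibility.VISIBLE, Set.of("carol", "dave")),
                    new Friend("bob", Visibility.HIDDEN, Set.of("eve")));
            System.out.println(connectableFoafs(friends)); // [carol, dave] - eve stays hidden
        }
    }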

Library search plugin: It has become increasingly clear that presenting users with a search box which takes ages, usually fails, and occasionally causes the whole node to crash (because they searched for too popular a term) is unacceptable. We need to fix this. And if we can't fix it, we should remove it.

User interface enhancements: Two volunteers are doing good work on this, including operhiem1's recent changes to make it easier to browse folders and choose where you want to save a download. Pouyanster will be working on this too, his most recent change being the country detection code. ArneBab and others have been working on the themes.

Freemail: Zidel is working on this for his Summer of Code project, including giving it a proper user interface, Web of Trust integration and anti-spam measures, and lots of bug fixes (probably including shorter addresses).

New freenet-ext.jar and 1.6 support: infinity0's (Ximin's) work on freenet-ext.jar has finally become official, although there is another stage yet. The next stage will involve breaking up freenet-ext.jar into many smaller jars, bundling a few extra libraries, and merging some code dependent on them (including new filters and code to reduce our disk I/O footprint on Windows).

Plugins: Sone needs to be made official as soon as possible. It's an important piece of functionality. It also needs lots of optimisation work. We should also think about jfniki.

What about China and Iran?

The rumours that China has deployed a whitelist system that kicks in once you have exceeded some threshold of international traffic so far appear to be bogus. However, it is vital that we build a friend-to-friend darknet in China. It appears that connectivity is generally pretty good, and in the past we were very popular when there was a lot of Chinese content on Freenet. We need more of it, and we need easy to use and ideally close to realtime chat or microblogging systems. Making it easier to mirror large sites onto Freenet would probably help too.

Iran is talking about having its own national internet, with only critical businesses allowed access to the external internet - joining such places as Burma and North Korea. We'll see. A friend-to-friend darknet could of course function on a national intranet, but if they accept that level of collateral damage they are probably quite capable of wiping out a darknet too. On the other hand, the economic impact is likely to be so huge that they will be a useful example to the rest of the world of how not to regulate the internet - but that's not much of a silver lining.

What about performance?

People are still complaining that Freenet performs worse than it used to. Inserts are, apparently, very slow, and downloads are still slower than they used to be. This may be partly because of better data retention, and hopefully new load management will help in a significant way. Also, the network has been shrinking, and this may be disruptive. Obviously we want to reverse that trend, but the main way to do that is to get 0.8 out.

What about Tahrir?

Your guess is as good as mine. I wish Ian luck - building darknets that can work in hostile environments is a good thing. Although IMHO for China Freenet is probably a usable tool, or will be very soon.

What about Bitcoin?

After a fairly serious crisis, I concluded that Bitcoin is unlikely to survive long once it becomes legally impossible to exchange it for bank money. A criminal network could provide exchange services, but given what they'd be likely to charge, there isn't going to be enough demand: Agorism is not going to happen (much to my relief). I expect such a crackdown will happen within the next few years - just as it did with e-gold and all the other pseudo-anonymous online currencies. New Scientist ran an article about it this week; apparently the total currency supply is worth $50M. I am also of the view that Bitcoin's dollar exchange rate is somewhere between a bubble and a pyramid scheme...

In principle, money is a form of information - but there is a difference between "freedom of expression in any form" and "freedom of exchange of money". But it seems highly unlikely that Bitcoin will cause any major damage (money laundering, open markets for assassinations, illegal porn and terrorism, tax evasion etc), because it can't survive illegally: there isn't enough demand for anything you can get through Bitcoin (given margins likely in the tens of percentage points) to justify a large criminal network exchanging it.

Do we understand Freenet?

Sort of! My reply to a question on FMS follows. A lot of things in Freenet are just a matter of good engineering (i.e. having enough resources), but some are not. Here goes:

Do the developers themselves still understand all their code, their design and the architecture of Freenet? Did anyone ever draw sketches, diagrams or similar? Can anyone explain the internals of Freenet to other new people who want to get involved? There seem to be that many bugs in Freenet, that I hardly know where to start to begin with. Bugreporting, but what area, code dissecting but what parts, and the list goes on and on.

There are some aspects of Freenet that are at least a little empirical/experimental in nature. Given limited data gathering we are not very good at this, but the nature of the beast is that we don't necessarily understand every detail in advance. This is especially painful given that all our theoreticians leave after a while, so we often don't have adequate simulations (oh and I'm not allowed to make simulations or people get really upset and in any case I don't have time).

Sometimes we just have to try things. In the case of RecentlyFailed, we have had the theory more or less understood for many years (but not simulated), yet hadn't got around to implementing it until recently. It was necessary to implement it because we needed to significantly increase our capacity for USK subscriptions. It is one part of a package that will hopefully allow chat apps to be adequately scalable, which is IMHO critical. It WAS tested in very limited simulations (just to check that ULPRs work, i.e. that on a small network, if everyone has requested a key, they will get it reasonably quickly after it is dropped onto one of the nodes), it was tested on the whole of the testnet, and there was some testing on the main network, but right now the testnet is less than 15 nodes at any given time, so it's not an adequate testbed for network level changes.

So we just deployed it to see what happens. Freenet is not yet 1.0, this change does not have any significant security impact, it seemed to more or less work in all the ways we were able to easily test it, and it was - as I saw it - absolutely vital in the short to medium term due to the likely expansion of chat applications and other USK polling and SSK requesting stuff.
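
For anyone who hasn't met ULPRs before, here is a toy sketch of the idea as described above - remember who recently asked for a key we couldn't find, and offer them the data if it turns up within a window. This is my own simplification, not the real implementation:

    import java.util.*;

    // Toy sketch of ULPRs: remember who recently asked for a key we couldn't
    // find, and offer the data to them if it arrives within a window.
    public class UlprToy {
        static final long WINDOW_MS = 30 * 60 * 1000; // e.g. half an hour (invented)

        record Subscription(String peer, long expiresAt) {}

        final Map<String, List<Subscription>> subscribers = new HashMap<>();

        void requestFailed(String key, String requestingPeer, long now) {
            subscribers.computeIfAbsent(key, k -> new ArrayList<>())
                    .add(new Subscription(requestingPeer, now + WINDOW_MS));
        }

        // Called when the key is later found or inserted locally.
        List<String> keyArrived(String key, long now) {
            List<Subscription> subs = subscribers.remove(key);
            if (subs == null) return List.of();
            List<String> notify = new ArrayList<>();
            for (Subscription s : subs)
                if (s.expiresAt() > now) notify.add(s.peer()); // offer the data to them
            return notify;
        }

        public static void main(String[] args) {
            UlprToy node = new UlprToy();
            node.requestFailed("CHK@example", "peerA", 0);
            System.out.println(node.keyArrived("CHK@example", 60_000)); // [peerA]
        }
    }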

On top of all that, even if the theory is perfect, there are two more problems:

  1. The network is much more chaotic than theory would assume. This is partly because of network congestion and other real world problems, but the main issue is that theory and simulations do not generally incorporate load management, or make very simplified assumptions about it. Practically, backoff and misrouting are common and cause problems. New load management should help with this, and it is being worked towards, but there is a tradeoff - more accurate routing (and therefore better data persistence, and probably better total throughput), versus some amount of queueing at each hop. Note that there is strong evidence that misrouting is the cause of poor data persistence: triple inserts radically increase the persistence of data, even though in the absence of backoff and other misrouting they would always go down the same path (see the toy calculation after this list).
  2. The implementation can have bugs. The testing I mentioned above was an attempt to find any critical bugs.
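
The toy calculation promised above: if backoff and misrouting mean that any single insert only reaches the "right" part of the keyspace with some probability p, then n redundant inserts reach it with probability 1 - (1-p)^n. The numbers here are purely illustrative, not measurements:

    // Toy calculation: if misrouting means a single insert only reaches the
    // "right" part of the keyspace with probability p, then n independent
    // inserts reach it with probability 1 - (1-p)^n. Purely illustrative.
    public class TripleInsertToy {
        public static void main(String[] args) {
            double p = 0.5; // assumed chance a single insert lands where requests will look
            for (int n = 1; n <= 3; n++)
                System.out.printf("%d insert(s): %.1f%%%n", n, 100 * (1 - Math.pow(1 - p, n)));
            // 1 insert: 50.0%, 2 inserts: 75.0%, 3 inserts: 87.5%
        }
    }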

Bottom line is we have severely limited manpower, inadequate testing resources, and some parts of what we are trying to do are extremely difficult. You can help by reporting bugs, by running a testnet node, and so on.

Recycled post consumer waste (freenet, climate)

Unfortunately, "The Great Global Warming Swindle" has come to Freenet. This "documentary" consists largely of recycled climate denier arguments which have to be debunked every time they are wheeled out again. It was originally made in the 90s and reconstructed with the same largely bogus experts in 2007, and aired by Channel 4 to grab some viewing figures. They added in some misrepresented quotes from a real expert, who subsequently complained about it, some extremely bogus and massively outdated graphs with mislabelled axes, and a total lack of any care at Channel 4 for anything other than viewing figures, and hey presto, a great propaganda piece that did wonders for public opinion - in 2007. A lot of effort was put into debunking it since then, notably a detailed formal complaint to the regulator. Now, maybe TV shouldn't be regulated - the americans take this view, and Freenet clearly isn't regulated. But, especially in an unregulated medium, you should never believe anything just because it looks authoritative. People believed it because it was broadcast on what they expected to be an authoritative, regulated medium. Google it, read the rebuttals from Real Climate, CCC's collection (including one from Sir John Houghton), the official complaint to Ofcom, the misrepresented scientist, and more; it's not all that hard to check the facts, but he who controls Google controls the world... All of this should not need to be repeated, and the documentary itself is in no danger of being censored, but the wierd community that is Freenet attracts conspiracy nuts of all kinds, as well as more unpleasant people. It is depressing to think that ultimately almost everyone believes what they want to believe and the facts be damned, but it appears to be the default state of human nature: for instance, a great many americans believed that Saddam Hussein was behind 9/11 for years afterwards.

Check the facts yourself, believe some specific person, or follow the consensus. Those are your options. Me, I prefer to check the facts, and if I can't do that I'll usually go with the consensus. Of course, that's the theory - in practice I'm amenable to social pressure like everyone else, but only up to a point. For instance, I tolerate my friends' belief in homoeopathy to the point of using it as a first line - but I don't avoid medical treatment as a result, because most likely homoeopathy is a placebo.

The sort of conspiracies that would be needed for climate change to be a hoax are considerably larger than would be needed to fake the moon landings. Look at the weather over the last 10 years for instance: one event is not evidence, but the global trend is very clear, and as a result of it there are so many record events, one on top of another, that the point is made very clearly. Even the recent cold winters in Europe and America (at the same time as record high temperatures, floods and fires in the rest of the world) are likely partly due to the melting ice caps.

2011/04/07

Hope after all? (climate)

Climate Progress has published a summary of rebuttals to the paper that spawned the New Scientist story that the last post was based on (I read the story but not the longer paper). Basically the paper was arguing that there is a limit on how much energy we can extract from wind in the atmosphere, that it's disturbingly low compared to current and likely demand, and that if we use more than a fraction of it we will see major disruptions to rainfall and similar problems.

Surprisingly, Climate Progress's own prescription for clean energy is probably unrealistic, as it is based on 450ppm and requires a fair bit of biomass, which is looking very doubtful. But they also link to this piece, which provides a short and sharp rebuttal, explaining that the models are bogus: modelling a wind turbine as a slight increase in the friction coefficient within a particular simulation cell (a vast chunk of the virtual earth) is not a valid approximation, and so the estimates for maximum available power are bogus too.

One more thing: The New Scientist article and the related editorial hint at energy use itself becoming a warming effect. This is what a lot of the quotes on the Climate Progress page refer to. This is orders of magnitude below the warming resulting from CO2 emissions.

So perhaps we can have our cake and eat it after all. By which I mean (at least in Europe, with slight changes elsewhere) a Green Grid composed of approximately 60% wind, much of it onshore, mostly the existing hydro, a bit of solar and marine power and so on, with big interconnects and some storage and dynamic demand technology, providing cheap clean electricity. Not only is it the cheapest option, and therefore the easiest to get past politicians, it's also the best option in terms of keeping the environmental movement together.

Finally I'd like to clarify that I'm not a climate scientist. I'm a human being who cares about many things. This blog has always been about whatever I happen to feel like writing about. Mostly that's Freenet, but often it's climate, sometimes it's online politics or ordinary politics or religion (or anti-religion) or something else entirely. Speaking of which...

Brief Freenet update

I've updated the roadmap. You might be interested.

Currently I am working on USKs, to speed up Freetalk: implementing bug 4660 (only tell the client when we're fairly sure we have the latest version), and then I will be working on date-based hints for USKs (we've been inserting them for a few months but haven't used them; the result will be that we can get to the current version of a site much more quickly and cheaply). There appear to be some bugs in the USK polling code; I'm not sure if these are bugs in what I've just added or deeper issues, but they're gonna get fixed either way, and if it is the latter, that's pretty important, as p0s has mentioned stalls...
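
To give a flavour of the date-based hint idea - and this is a hypothetical illustration, the real key format and insertion code are different - the publisher inserts a small hint under a name derived from the date, so a fetcher can jump close to today's edition instead of probing every edition since its last visit:

    import java.time.LocalDate;

    // Hypothetical illustration of the date-hint idea (the real key naming
    // and APIs differ): alongside each new edition, insert a small "hint"
    // under a name derived from the date, containing the latest edition number.
    public class UskDateHintToy {
        static String hintKeyName(String siteName, LocalDate date) {
            return siteName + "-DATEHINT-" + date.getYear() + "-" + date.getMonthValue();
        }

        public static void main(String[] args) {
            // A fetcher requests the hint for the current month (then the previous
            // one, etc.) and resumes normal USK polling from the edition it finds.
            System.out.println(hintKeyName("myflog", LocalDate.of(2011, 4, 7)));
            // -> myflog-DATEHINT-2011-4
        }
    }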

Then the next thing is a little more work on the connection level, related to network congestion. After that, I'm not sure whether we will need network-level changes for Freetalk (e.g. RecentlyFailed) in the short term; if not, probably some more on new load management, maybe finally get some form of it deployed. Or even skip it in favour of stabilising towards a release, since Freenet seems to be working reasonably well at the moment...

We now have only $6400 in the bank, meaning apart from what I just got paid ($5000, about 2 months), we only have another 2 and a bit months left of funding. So donations would be nice, whether by bitcoin or paypal or some other means. But content, code contributions and the like can be even more valuable. Seeya folks.

2011/04/04

All hell? (climate) (UPDATE: SEE FOLLOWING POST FOR REBUTTAL)

This article is extremely worrying. The claims it makes, and related issues, are that:

  1. The amount of energy that can be extracted from wind turbines on non-glaciated land (i.e. onshore wind, which is far cheaper than offshore, though you can probably include some near-shore offshore in this too given how it is calculated) is roughly 18-34TW, because the total "free energy" or useful work that can be extracted without causing diminishing returns through turbulence and related issues is a small fraction of the input from the sun.
  2. Total fossil fuel demand is around 17TW. This is likely to rise significantly.
  3. If we use the maximum wind figure from above, there will be serious climatic impacts:
    • Approximately a 0.5 degree temperature rise. This is distributed unevenly across the world with a maximum of 2 degrees in the map shown. This is not a serious issue, compared to what we can expect from 2 or 4 degrees warming, and it will not cause runaway feedbacks.
    • Serious impacts on local patterns of rainfall, heat flux and solar thermal radiation, comparable to a 4 degree scenario.

The consequences of this would be serious: right now we only use 30GW or so of wind globally, but a lot of studies on how to power the world without nuclear power or fossil fuels rely heavily on wind. These are the "green grid" studies (full article, diagram): if you link up all the national electricity grids, using high voltage DC where you have really long links, then for example you can provide Europe's electricity needs with a lot of wind, the existing Northern countries' hydro, a bit of solar and a bit of biomass. The overall cost is reasonable - comparable to wholesale electricity prices, a bit more than coal - although it requires massive investment (which would have to happen anyway as most of the electricity infrastructure needs replacing soon). More storage wasn't strictly necessary, but may become available through electric cars and new battery technologies, and "dynamic demand" helps a lot too; neither of these was taken into account in the early studies, so there's probably some flexibility. One thing that was never clear to me was whether this included increased demand for transport and heating.

Many green groups had bought into that agenda quite heavily, partly because it doesn't need nuclear, which is problematic in a lot of ways, or carbon capture, which was only ever going to be a transitional technology and is much dirtier than the alternatives when you consider the full fuel cycle. I did personally. If it doesn't work, we have serious problems. First, what are we going to build our electricity grids on? (Considering only generation, assuming that big grids, dynamic demand and a little more storage happen)

  • Fossil fuels: Are clearly out. No fossil fuel generation should be being built in the West right now. Coal kills far more people than nuclear - and that's ignoring climate change and just counting particulate pollution etc. Gas is less polluting but its price is linked to oil, and it still produces a lot of CO2. Given we need to turn things around by 2015, building more fossil fuel plants is crazy. There are plenty of options for "keeping the lights on".
  • Hydro: Great for storage. Cheap. Most of the easy resource is used already, especially in Europe. Large scale can be extremely dirty, flooding large areas, displacing a lot of people, and releasing a lot of CO2 because of the flooding. See e.g. the Three Gorges dam, the Ilisu dam, etc.
  • Wind: Still important, but we may have to keep it to a few terawatts if the latest study is correct.
  • Solar thermal: Promising, especially in desert areas; far more efficient than photovoltaics; relatively expensive, and worries about water usage in the desert.
  • Solar photovoltaics: Rather expensive. Very inefficient. Big subsidies in a lot of countries and prices coming down; some people in the industry talking about achieving grid parity in a few years, but that may be delayed by issues with rare earth metals. Current fairly generous UK subsidies are supposed to lead to it producing around 2% of UK electricity needs, so a long way to go; currently that is limited to the middle class, but the Green Deal loans might expand it.
  • Biofuels: A lot of "first generation" biomass is so energy intensive to grow that it produces more CO2 than just using oil. Even the "good" biofuels displace food production, resulting in more deforestation (btw the same argument can be made for organic food, assuming you don't have a sensitivity to specific pesticides). Second generation biofuels also looking shaky, it may be possible to grow some on "waste" land, although that often has some ecological function. Third generation (algae) looking very doubtful. Some biofuel can be produced from food waste, agricultural waste etc. There will be heavy demand from long distance road haulage, aviation, and domestic heating. Even if we eventually get rid of the first two the third may use up the entire resource of sustainable biomass: there just isn't that much waste, that's why most biofuels are grown.
  • Marine power - tidal flow etc: Promising, but similar issues to wind: It's a limited resource. How limited is not clear yet.
  • Geothermal etc: Mostly work only in specific places e.g. Iceland. Possible to transport it when not used locally via Green Grid, of course.
  • Nuclear: The inevitable conclusion is we're gonna need a hell of a lot of nuclear. Peak uranium will hit in a few decades but hopefully by then we'll have thorium or maybe even fusion. The catch is it takes at least 10 years to build a plant, the two third generation plants being built in Finland and I think France are way over budget, way over schedule, and regulators and others are asking questions about safety, although I'm sure they are vastly safer than the 40-year-old BWRs in Japan which would have worked fine if it wasn't for a once in a thousand years earthquake and tsunami (although whether that figure is viable in the future is uncertain). And then there's the inevitable public backlash...

All this is extremely annoying to radical but technocratic greens like me (as opposed to Paul Kingsnorth's deep ecology), and may do a lot of damage to people like the UK Green Party and Friends of the Earth. It's actually not far from James Lovelock, who is an iconic eco-pessimist. In the shorter term it will provide yet more fodder for those who want to do nothing. And most importantly of all, it will increase the costs of eliminating fossil fuels and avoiding a 4 degrees or worse world by at least a factor of 2 and probably more. Right now onshore wind is significantly cheaper than nuclear, and arguably much of the subsidy it receives is simply a correction for the odd way that the UK energy markets work (when the wind blows we don't need to turn on the expensive gas plants, and on those days even the coal plants get the same fee as the most expensive gas plant; I can't find the article on this, I think it was in the Guardian's Comment is Free), although it is no longer falling in price due to issues with materials. Offshore wind, nuclear and coal with carbon capture are all rather more expensive, and most of the rest (e.g. tidal) are early days and very expensive; solar photovoltaics even more so.

So the situation is extremely worrying. Doing enough of the easy things was already meeting with a lot of opposition (for example, the EU will not move to a 30% target in spite of it probably increasing their GDP). With the new data, the great clean energy transformation that most of the greens had been asking for is going to be a lot more expensive and a lot less clean, will be a lot harder to argue for, and will likely lose a lot of campaigners. So it's all rather depressing. That is, if it's real - there was another study a few years back that forecast some changes to wind circulation, but it's early days: this is one recently published paper, and it may be that better models yield significantly different answers.

The article was published in the print edition of April 2nd, and on the website on March 31st; the paper was published in February. So I'm fairly confident it's not an April fools. And if it was it would be a horrendously irresponsible one: stuff like this tends to get quoted endlessly even after it's been retracted. Maybe I'm contributing to that process here ... I will certainly post an update if I hear that it is bogus.

Another interesting note, vaguely Freenet-related: The Met Office 4 degrees map appears to have disappeared. First it was moved to archives, now it's gone completely.

Which is a shame, because it illustrated the problem quite well: first, temperature changes are not evenly distributed across the world. In the Arctic, and in some parts of Asia, we'd see over 10 degrees, and 5 plus in a lot of other places; Africa hit hard, but also a lot of Mediterranean Europe. Second, it's not just about temperature: temperature does directly affect food production, but it also drives more extreme weather, changes in precipitation, sea level rise and so on. Remember last year? No, not the record cold temperatures in the West (probably caused by Arctic melting), but the record high temperatures in much of the rest of the world and drastic consequences in Pakistan, Russia, Australia; severe droughts followed by flooding have been happening in large parts of Africa more or less every other year for the last decade. For some more doom and gloom, see Climate Progress's summary of big climate science stories of 2010, which were largely invisible thanks to a lot of bullshit about some stolen emails...

That map was regarded as authoritative, and everyone linked to it. And now it's gone, and it's pretty hard to find anything like it. What do you expect from the "greenest government ever" that has declared an end to the war on the motorist (in particular essentially abolishing speed cameras - which I hold my MP personally responsible for, as he was shadow roads minister in opposition - and the fuel duty escalator)? Some people in the government clearly do care a lot about climate change; this is probably one of the better things resulting from the coalition. It still supports a 30% target for Europe, although this is not going to happen; it will introduce a floor price for carbon, but initially very low, and only in the power sector (heavy industry, despite its claims, has extremely favourable treatment, usually at the expense of the power sector, see Sandbag's work). It is finally introducing the Green Deal, which all parties supported, Labour having rejected it since 2005 and suddenly converted in early 2010. It decided to go ahead with the port upgrades programme, which is essential for offshore wind, despite strong Treasury opposition. But that same department has ensured that the Green Investment Bank won't be able to borrow until 2014, and may now adopt a drastically weaker 2030 carbon target than recommended by the official committee. Not to mention abolishing various quangos that did useful work one way or another, particularly in publishing embarrassing reports. Blatantly populist moves to cut air fares and fuel duty probably won't be the worst of it. But a pure Tory government would have been even worse: the back benches passionately reject climate change, as do some senior front benchers.

Scary stuff, depressing. Just thought I'd share it with you, to try to increase the general level of understanding and spread the pain... I will still be doing the Big Climate Reconnection, as its demands are perfectly valid and even more urgent than ever (especially if the implementation will be more expensive than expected), but for now, on with Freenet!

2011/04/02

Lots going on

A whole lot of code has landed lately in Freenet:

  • Freetalk (chat forums system): First and foremost, Freetalk, after batosai started it off and p0s worked on it for several years, is now an official plugin for Freenet. That means you can load it from the (slightly revamped) Plugins page, in advanced mode (because it is still marked as experimental). When the next build comes out it won't be marked as experimental, so you'll be able to load it even in simple mode, and there will be a button at the top of the forums page. Please try out Freetalk! It is a forums system based on Freenet. It is similar to FMS, but as a plugin written in Java it is much easier to install. It will eventually be installed with Freenet, with a hint to the user to create an identity. The other big improvement in Freetalk over FMS is that it separates the Web of Trust anti-spam system into a separate plugin, meaning it can be used for all sorts of other things (you can create a separate identity if you want to, but at least the code is shared). Expect distributed searching, microblogging, real-time chat and more in the coming months. Sone already uses it, although that is not yet official; FlogHelper uses it, and will be official in the next build; and of course Freetalk itself uses it. One serious concern with Freetalk for many users is that it allows for "censorship" in the sense that it is essentially a distributed spam filter: whether you see an identity depends on whether the identities you trust (and the identities they trust) trust that identity not to post spam (see the sketch after this list). Eliminating spam while not allowing people to abuse that mechanism to censor people they don't like is a hard problem, but on Freenet spam (or more correctly denial of service attacks) is a severe problem, especially on Frost. So far Freetalk makes a reasonable compromise - if you reannounce your identity, you can be seen by more people, as long as they haven't personally marked you down. There will be more safeguards and maybe eventually a "positive trust only" mechanism (but IMHO the only way to make that work will be for newbies not to be immediately visible to everyone). Ongoing problem... Of course your other option is just to create a new identity. As I've mentioned before, and so has NowWhat, there may be big issues with scalability for WebOfTrust and Freetalk; also please note that it is not optimised yet - it may cause a lot of load on your computer.
  • FlogHelper (blogging tool): This is the first of the other WebOfTrust based apps to be deployed. For now it just uses the WebOfTrust to manage its keys, but in future you will be able to go straight from Freetalk to somebody's blogs and use it as a way of announcing them. FlogHelper is an easy to use blogging tool for Freenet, which should make it a lot easier for new users to post a flog. Thanks to Artefact2! Hopefully inside a month or so FlogHelper will support embedded Freetalk forums so that you can leave comments. It's already searchable via a similar mechanism.
  • Freemail (email over Freenet): zidel has done various important fixes. The new version will be deployed soon. Long-term, possibly as early as the Summer of Code, we'd like to rewrite this to integrate properly with Freetalk. Freemail does a number of things right - IMAP and SMTP interfaces, one-to-one channels between the people exchanging mail to prevent eavesdroppers seeing when messages are sent and so on - but it needs a web interface and anti-spam support, and it needs to be integrated into Freetalk UI-wise so you can e.g. send a private reply to a Freetalk post.
  • freenet-ext.jar #27 (infrastructure): freenet-ext.jar is an extra jar file (Java archive) that we include with Freenet, which mostly contains third party libraries. The new version will be deployed soon. It tidies up the code significantly and includes various portability fixes. The following version will split freenet-ext.jar up into its component jars, which will allow us to update the Java Service Wrapper; that should help to make Freenet more reliable, especially on Windows, as well as making packaging on Linux easier (this has already been demonstrated). Thanks infinity0, I wouldn't want to have to deal with this stuff myself!
  • Node to node file transfer: Thynix/operhiem1, a prospective Summer of Code student, has rewritten the UI around sending a file to a friend to use the same code as when you are uploading a file. You can therefore upload a file through your web browser (with severe size limits due to internal limitations and the amount of copying involved) or browse for a file on the node's disk. This will be in the next build.
  • Rabbit hole theme and security wording: Some time back ArneBab started working on changes to the user interface. They were criticised, at least by me, for being too wordy. He turned the GUI elements into a new theme, the rabbit-hole theme, which is now included in Freenet and which you can activate from the web interface config page. Sometimes lots of words can actually make things clearer, and the people who most need Freenet will often be willing to read them. So after much discussion, we have a detailed explanation of how to use Freenet securely on the first page of the wizard, but hidden until you mouse over it.
  • Everything else: And of course we have the usual translation updates (thanks sweetie, our ever constant German translator, but also thanks to the occasional contributor of Russian translations, an important language IMHO); all manner of bug fixes and minor tweaks; the beginnings of better support for ARM processors; and loads more stuff that I've forgotten about. See the changelogs for details. There are also some interesting developments going on out of the tree - people working on data persistence, wikis and all sorts of tools.
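
Since I promised a sketch above: here is a toy version of the distributed spam filter rule. The real WebOfTrust scoring is considerably more involved; this only shows the shape of it - your own opinion wins, otherwise the opinions of identities you trust decide:

    import java.util.*;

    // Toy version of the visibility rule described above: you see an identity
    // if you trust it directly, or if identities you trust vouch for it (and
    // you haven't marked it down yourself). The real WebOfTrust plugin's
    // scoring is considerably more involved than this.
    public class WotVisibilityToy {
        // trust values: -100 (spammer) .. +100 (trusted), keyed truster -> trustee
        final Map<String, Map<String, Integer>> trust = new HashMap<>();

        void setTrust(String truster, String trustee, int value) {
            trust.computeIfAbsent(truster, k -> new HashMap<>()).put(trustee, value);
        }

        boolean visibleTo(String viewer, String identity) {
            Integer direct = trust.getOrDefault(viewer, Map.of()).get(identity);
            if (direct != null) return direct >= 0;   // your own opinion always wins
            // Otherwise: sum the opinions of identities you trust positively.
            int sum = 0, count = 0;
            for (var e : trust.getOrDefault(viewer, Map.of()).entrySet()) {
                if (e.getValue() <= 0) continue;
                Integer second = trust.getOrDefault(e.getKey(), Map.of()).get(identity);
                if (second != null) { sum += second; count++; }
            }
            return count == 0 || sum >= 0; // unknown identities default to visible in this toy
        }

        public static void main(String[] args) {
            WotVisibilityToy wot = new WotVisibilityToy();
            wot.setTrust("you", "alice", 80);
            wot.setTrust("alice", "spammer", -100);
            System.out.println(wot.visibleTo("you", "spammer")); // false - alice marked them down
        }
    }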

It's great to have a lot of volunteers! It makes it a lot easier to get on with Freenet if you're not alone, and ultimately Freenet will only be sustainable if there are plenty of people working on it - without being paid. I get paid for this, out of the money that Freenet Project Incorporated manages to raise - most of it from Google in the last few years, but we can't keep relying on them (I doubt they'd give us any more if we asked for it!). We have raised a few hundred through bitcoin, and probably more than that through paypal in the same period. We'll need more if I'm to keep going, which IMHO is a good thing at this stage. But long term, Freenet needs to be able to stand on its own two feet, and if necessary, to be developed underground - that is, over Freenet itself - by anonymous developers. Of course, the flipside is that Freenet can't afford for me to be doing volunteer coordination, quality control and release management; I'm supposed to be a full time coder, and most of the stuff that volunteers work on - with the notable exception of Freetalk - isn't essential to the critical path to the next release. So there is some tension there, especially with the current policy that all released code will be briefly reviewed for security issues at some point; but mostly it works out.

Meanwhile, I've been working on low- and mid-level code, mostly debugging timeouts on the testnet. Most of that code is in build 1358, but there is a big chunk of it still on master and testnet, which will be deployed soon. The problem is, we could accept a load of realtime requests for a single peer - up to half our capacity - because a bit of burstiness is a good thing for realtime requests; in fact it's what makes realtime requests work. But the packet scheduler was still doing strict round-robin between all peers that have data queued - even if that data is only bulk data, which has much longer timeouts. The result was timeouts on the realtime data, even when there was no link level congestion. The initial fix was simply to not overallocate to a peer. That worked well in terms of avoiding timeouts, but unfortunately it caused a significant slowdown, especially on local inserts. The better solution is to change packet scheduling to go by deadlines - which has been implemented and is being tested on testnet. More testnet testers are of course always welcome, as is inserting and fetching stuff - actually testing testnet!
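
To make the scheduling change concrete, a toy illustration (not the real scheduler, and the deadlines are invented): instead of strict round-robin over peers with queued data, pick whichever queued message has the earliest deadline, so realtime traffic with tight deadlines is not starved by bulk traffic:

    import java.util.PriorityQueue;

    // Toy illustration of deadline-based packet scheduling: send whichever
    // queued message has the earliest deadline, so realtime traffic (short
    // deadlines) is not starved by bulk traffic (long deadlines).
    public class DeadlineSchedulerToy {
        record QueuedMessage(String peer, String type, long deadlineMs) {}

        public static void main(String[] args) {
            PriorityQueue<QueuedMessage> queue = new PriorityQueue<>(
                    java.util.Comparator.comparingLong(QueuedMessage::deadlineMs));
            long now = 0;
            queue.add(new QueuedMessage("peerA", "bulk", now + 30_000));     // bulk: generous deadline
            queue.add(new QueuedMessage("peerB", "realtime", now + 1_000));  // realtime: tight deadline
            queue.add(new QueuedMessage("peerA", "bulk", now + 30_000));

            while (!queue.isEmpty()) {
                QueuedMessage next = queue.poll(); // earliest deadline first
                System.out.println("send " + next.type() + " to " + next.peer());
            }
            // Output: the realtime message first, then the bulk messages.
        }
    }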

What next? Most of the above should be deployed early next week. Hopefully the low level stuff will work. Unfortunately there is more low level stuff that needs dealing with, particularly related to congestion control. After that, hopefully fatal timeouts will be a largely solved problem, so we can enable disconnect on fatal timeout on opennet. After that, deploying more chunks of new load management. If we eventually get that out the door, no doubt there will be endless debugging and tuning; the next release critical tasks will be the darknet enhancements and Freetalk-related SSK/USK changes. Hopefully we can get away with DBR hint support, bugfixes, and RecentlyFailed/failure tables request quenching, and won't need full blown passive requests until well into 0.9. Also some people would greatly appreciate deploying the new low-I/O datastore, and no doubt there will be lots more debugging. That's (some of) my TODO list, other people are working on other things. For instance, hopefully Freetalk will have good support for attaching files in the not too distant future, and (optional) image boards, as well as a lot of optimisation work.

I've had a lot of positive feedback lately. It appears that 1358 has made a significant difference. I don't know whether download speeds are back up to where they should be, but we seem to be moving in the right direction. I read Freetalk and FMS, so let me know your experiences of Freenet performance, and indeed any other problems you find; usability testing (get somebody new to install Freenet, don't help them unless they get really stuck, and tell us what they didn't understand or got stuck on) is particularly helpful.

Freenet lives!

2011/03/13

Wandering in and out of the wilderness

For a while I've been wandering in the dark: blocked, unsure of what connects to what, when we will see major progress, or whether Freenet will ever amount to anything, and finding it difficult to get any work done. There are other factors, but I believe a large part of this is directly about work.

However, I'm pretty sure I'm back on track now. After implementing the testnet, I've been able to identify a wide range of low-level to mid-level bugs, related to packets, messages, and requests. Most recently I have some significant leads on the trail of "why do realtime requests all fail when you do a bunch of them at once", which has bugged us ever since the realtime/bulk flag was introduced. I have fixed lots of bugs along the way, some minor, some serious. The next step will be more testing locally, more debugging, deploying it on the testnet, even more debugging, and then deploying the fixes in a new build. 1357-pre1 contains a lot of the fixes and I'd appreciate testing of that, but there will likely be significant changes before 1357 is released.

Hopefully the current phase of low level debugging will come to an end fairly soon, with significant performance and stability gains deployed widely. One particularly important issue is eliminating all bogus fatal timeouts. Timeouts are tricky, mostly because new load management needs us to know exactly how many of our requests are running on our peers. Originally we would send a request and then, if we didn't get a response within a set period, move on to the next node. We still do this, but we also wait in parallel for a longer period (long enough that if the node has just gone offline we will disconnect it), and if it doesn't reply within this longer period, we declare a "fatal timeout". Currently we disconnect from darknet nodes when we get a fatal timeout (they immediately reconnect). The plan was always to disconnect from opennet nodes too (and not reconnect), but there were lots of bugs causing bogus fatal timeouts. I have now fixed most of those bugs and see very few fatal timeouts; the next step will be to reinstate dropping opennet peers when they get a fatal timeout, which will allow us to proceed with new load management.
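
A toy sketch of the two timeouts described above, with invented numbers and names: after the short timeout we simply route the request elsewhere; only after the much longer one do we declare a fatal timeout, since by then the peer has either gone away or is seriously misbehaving:

    // Toy sketch of the two-stage timeout: short timeout -> route the request
    // elsewhere (while still waiting in parallel); long timeout -> fatal.
    public class TwoStageTimeoutToy {
        static final long SHORT_TIMEOUT_MS = 10_000;   // give up waiting, try the next node
        static final long FATAL_TIMEOUT_MS = 120_000;  // declare the peer broken / disconnect

        enum Action { KEEP_WAITING, ROUTE_ELSEWHERE, FATAL_TIMEOUT }

        static Action check(long sentAtMs, long nowMs, boolean answered) {
            if (answered) return Action.KEEP_WAITING; // nothing to do, the request completed
            long waited = nowMs - sentAtMs;
            if (waited >= FATAL_TIMEOUT_MS) return Action.FATAL_TIMEOUT;
            if (waited >= SHORT_TIMEOUT_MS) return Action.ROUTE_ELSEWHERE;
            return Action.KEEP_WAITING;
        }

        public static void main(String[] args) {
            System.out.println(check(0, 15_000, false));  // ROUTE_ELSEWHERE (but keep waiting in parallel)
            System.out.println(check(0, 130_000, false)); // FATAL_TIMEOUT -> disconnect the peer
        }
    }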

New load management has of course caused all sorts of side issues, some of which have been huge (like new packet format, or the work on timeouts I mentioned above). The testnet has paid off enormously with the low level stuff, well worth the effort of (re)implementing it. If you can run a testnet node please do; there is a separate installer, see the previous post.

Hopefully new load management will make a significant difference to routing accuracy and hence, eventually, all kinds of performance but particularly data retention. A network that stores data for 20 weeks is far more useful than a network that stores data for 2 weeks. This is a big issue. It will also significantly reduce the network's vulnerability to some kinds of attack. Last year it was a big issue, and I introduced some bugs which caused major problems for the network, and then decided (foolishly, but after having tried to investigate the problems for quite some time) that load management was the problem and got on with implementing it; eventually I figured out what the problems were. Some of the side-issues involved in implementing new load management have caused serious knock-on effects due to temporary disruption, or bugs in those changes. New packet format was a Summer of Code project, seemed to be pretty much ready, and had a very large impact, but it solved some long standing problems, increasing the average payload percentage dramatically especially for slow nodes. Since then I've spent a fair amount of time debugging it, but it's still been worth it. Anyhow, I'm sorry that I didn't realise that the problems last year were due to bugs; but I make no apologies for focusing on new load management, which remains vitally important, and which I really think will land in the relatively near future, in some form.

There has been a significant reduction in performance over the last year or so. Some of it is due to bugs created during new load management, but some of it may be due to FMS and Freetalk having scaling issues. Fundamentally Web of Trust style chat apps rely on outbox polling, which in a naive implementation (like the current one) can't scale. One reason we merged new packet format when we did was to increase the network's capacity for failed SSK requests, which these chat applications do a lot of. Eventually it will likely be necessary to make fairly significant changes both to the chat apps and to Freenet to support this - especially if Freetalk's being deployed by default results in a lot more active users. The chat apps will need to poll a trusted subset of the identities rather than trying to regularly poll everyone, and will likely rely on hints to propagate new versions. Freenet will need to provide more efficient polling - in the short to medium term this means implementing RecentlyFailed, which is related to the old failure tables that some oldies may remember - since we already get a notification if a key turns up within half an hour, we can safely stop requests from other nodes for the same key, as long as it's been requested a few times recently; if it is found, we will propagate it to those requesters too.
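
A toy sketch of the RecentlyFailed idea (again my own simplification, not the real code): if a key has recently failed for several requesters, reject further requests for it for roughly the ULPR notification window, instead of flooding the network with requests that will almost certainly fail:

    import java.util.HashMap;
    import java.util.Map;

    // Toy sketch of RecentlyFailed: if a key has failed for several requesters
    // recently, short-circuit further requests for a while, because the ULPR
    // subscription will propagate the data anyway if it turns up.
    public class RecentlyFailedToy {
        static final int THRESHOLD = 3;                // "requested a few times recently"
        static final long QUENCH_MS = 30 * 60 * 1000;  // roughly the notification window

        record Entry(int recentFailures, long quenchedUntil) {}
        final Map<String, Entry> table = new HashMap<>();

        // Returns true if we should answer the request with RecentlyFailed.
        boolean shouldQuench(String key, long now) {
            Entry e = table.getOrDefault(key, new Entry(0, 0));
            return e.quenchedUntil() > now;
        }

        void recordFailure(String key, long now) {
            Entry e = table.getOrDefault(key, new Entry(0, 0));
            int failures = e.recentFailures() + 1;
            long quenchedUntil = failures >= THRESHOLD ? now + QUENCH_MS : e.quenchedUntil();
            table.put(key, new Entry(failures, quenchedUntil));
        }

        public static void main(String[] args) {
            RecentlyFailedToy rf = new RecentlyFailedToy();
            for (int i = 0; i < 3; i++) rf.recordFailure("SSK@example", i);
            System.out.println(rf.shouldQuench("SSK@example", 10)); // true - quench repeat requests
        }
    }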

In the longer term, we will probably need to implement full blown passive requests. These will be a lot of work, but will be worth it, greatly improving performance for chat-style apps and anything else that needs fast notifications of updates, greatly reducing their impact on the network, and opening up new possibilities such as real publish/subscribe. They will rely on something close to new load management, and will pave the way for long-term persistent requests, which may make high-latency transports such as sneakernet viable. For instance swapping USB keys with your friends when you meet them, or getting your phone to do the same automatically with theirs; this would work even if the internet is completely locked down, or not available at all. Haggle does something like the latter, but does it in an "opennet" fashion, so it works as long as it's tolerated; Freenet could do it just with friends, allowing a large amount of data to be accessible within a longish period, but probably with fairly good bandwidth, minimising spam, allowing publish/subscribe broadcasts of new content, and so on, although there are many big challenges before we reach that point.

Will this be enough to make chat scale to millions of users? Probably not, although "millions of users" is a long way off! But combined with some tricks on the client side it should work well enough. Long term, Twitter-like functionality (especially if you're doing more following than searching) may actually work better with Freenet than chat forums do - although of course this is at the opposite end of the latency scale. Or is it? Ian's work on Tahrir sparked some interesting discussions on how to protect a small upload - a tweet, an SSK - where it doesn't matter if it's delayed by an hour or so. This will likely result in some basic onion routing in Freenet eventually, to protect chat posts, SSK top blocks for splitfiles, and so on. That would enhance security significantly for some very interesting scenarios - specifically a large darknet where the attacker cannot easily connect to all nodes, but can work his way across the network at some expense - without costing much performance, because the CHKs are just inserted as normal. As I've explained many times, if your peers are compromised, so are you; it will be a long time before we can do much about this, and if we ever can, it will likely come at a major performance cost. But if you use darknet, if the bad guy is initially a long way away, and if he can only hack his way across the social network slowly, compromising or social engineering someone at each hop, then Freenet provides not only near invisibility and unblockability short of really drastic steps such as prohibiting all p2p connections (assuming we implement some basic stego), but also a reasonably interesting level of security. One key point in Freenet's favour is that, because it works at a high level and doesn't try to anonymise connections, it is distributed, robust (at least it should be, once we store data efficiently and sort out the Pitch Black attack), and can use whatever latency the user is happy with - which varies from one application to another. For large downloads, sending the request out over some low bandwidth stego link and then getting the data back on a USB key may be a viable option for a lot of people. Likewise, provided we have a good UI, blog updates don't mind latency that much.

Which brings me to Freemail. Zidel, after writing the new packet format (which I had to do a lot of work on afterwards, though his code was good - a lot of what I did was extending it), has done a fair amount of work on debugging Freemail. This will be deployed as soon as I get around to it. IMHO Freemail is really key for a lot of users - maybe not by numbers, but by need and dedication - and it will help to cement the community and keep people on board. Now it just needs a decent web interface - assuming he's fixed the bugs that made it very unreliable for me, which I think he may well have done.

Freetalk is at RC2, and RC3 is imminent, according to p0s; RC3 may very well be final, although there are some hard to reproduce trust recalculation bugs. When it is deployed we will really see whether the scalability issues I mentioned above are going to kill the network...

There are also discussions going on about wikis and backups. I won't cover them here, for now - go read them on Freetalk and FMS (which is remarkably hard to set up, as a not-very-techie newbie I met discovered; we need Freetalk now!)

My view of release policy remains essentially the same: We should not release Freenet 0.8 until the network has settled down, new load management has been deployed and debugged, and the long-promised darknet enhancements have been implemented. Ian would like to release 0.8 as soon as possible; I agree with him that we need a release soon, for lots of reasons, but we can't release with a severely broken network. I'm convinced it's worth completing new load management, and the darknet enhancements are both essential (in that opennet is far less secure than darknet, and we don't want to encourage people who really need Freenet to use opennet) and relatively small changes. Deadlines are good for preventing feature creep, and we will need a deadline soon, but not yet; feature creep is not the main problem at the moment.

The planned darknet enhancements should be really helpful for anyone trying to use the darknet, and should help it to expand much more quickly; the key features are the ability to make your friends visible to your friends, to connect to your friends' friends automatically, to manually upgrade them to full friends (exchanging a password to confirm identities), and to invite friends to Freenet, with the invite containing the installer, the noderefs of your node and some of your friends, enough to get online even if you are behind a nasty NAT or are even offline at the time. We will probably add some opennet support too, in case the seednodes are blocked, although this will require turning on seed mode on many more nodes. Long term, we will need this: Opennet is just far too easy to block, or worse, to comprehensively surveil. We must build a global darknet, or Freenet is finished, or worse, provides a false sense of security. Which is not to say that opennet isn't useful in the meantime.

Why Freenet, again

The question of "Why Freenet?" has come up recently. I posted an explanation of my reasoning way down near the bottom of this blog, but I have changed a lot since then; most notably I have ceased to call myself a Christian and have developed a great skepticism about how some kinds of Christians view circumstances, anecdotes, and divine sovereignty. Plus, some of the places I'd hoped Freenet might help have started to erupt into revolutions without it. So do I still have an answer? Yes.

The simple answer is freedom of speech matters. A world with Freenet is better than a world without Freenet. Freedom of speech is vital for democracy. In Western countries, freedom of speech is constantly under threat, especially online, from both corporate and government agendas. Blocking websites, attacking peer to peer, and so on; he who controls Google, Twitter and Facebook controls the world as it is perceived by many. Plus, it is very hard to host a controversial and popular website. If it is on a small hoster, it is vulnerable to DoS attacks, and if it is on a big hoster, it is vulnerable to political interference; this we saw with Wikileaks (who are doing a great service to the world). Wherever it is it is vulnerable to legal attacks and paralegal harassment (think for instance anti-Scientology sites). And probably to copyright-based blocking too in the not too distant future - as the MPAA suggested during discussions on ACTA.

In the rest of the world, where democracy is a daydream, there is overt political censorship, and speaking your mind usually means at the very least ignoring Facebook's real name rule - and you can probably still be traced (and then imprisoned, tortured, disappeared etc). For instance, China not only has extensive (if eccentric) political censorship, it has also managed to block most of the Tor bridges (not only the public nodes). If it is technically possible to guarantee freedom of speech online, in the broadest possible sense of consensual data exchange, then it's a good thing. Of course, a lot of what Freenet is used for today is pornography (exploitative and degrading of both the producers and the consumers), illegal pornography (far worse), or casual copyright infringement (which undermines efforts to build a real alternative), plus some legitimate but very distasteful political speech (holocaust denial, for example; looking more broadly, the whole Climategate farce was seen as a "leak"). That's an unavoidable consequence of freedom of speech. The US courts recognise that free speech is sometimes anonymous speech, partly for historical reasons; I doubt that European courts do, but it is fundamental.

Fortunately most of the old technological-level threats to freedom of speech online have receded somewhat (e.g. trusted computing is nowhere), although new ones have popped up in their place. National blocking systems, legislative requirements on peer to peer and web services, three strikes laws (extrajudicial punishment!), and the increasing centralisation of just about everything on the internet are all clouds on the horizon. Freenet needs to be secure, but it also needs to be popular. It could perhaps be blocked by excessive force on the part of governments - such as blocking all peer to peer - but that is probably a very long way off, and may not happen at all, and in places where there is a greater need, some sneakernet-based network may be viable...

Arguably Freenet isn't for revolutions. It's for maintaining freedom of speech for the long period before a revolution, for making it expensive to lock down the internet in supposedly free countries, for testing some interesting technologies, and more.

Non-Freenet...

Bitcoin is not likely to be a serious threat to Freenet, nor is it in any danger of establishing an anarchocapitalist utopia (shudder). If mixing services make bitcoin anonymous, money laundering authorities will simply target the exchangers - freeze their accounts and their customers' accounts, make life difficult for them, or regulate them and make it impossible to exchange coins that have been through anonymising services. The most likely government overreaction is to target the exchangers through the banks, not to take excessive steps against peer to peer software. Of course governments may do that anyway - but not because of bitcoin, more likely because of copyright, paedophilia, terrorism and all the usual nonsense.

We are still planning to see our MP again about climate change, in May. If you're in Scarborough, contact me! The organisers of the Big Climate Reconnection have said that this is fine (it had been expected for April); the Energy Bill and other things will still be being discussed and we can catch up on last time's promises.

The wikipedia articles on molecular nanotech (known in some science fiction as universal nanoconstructors) and mechanosynthesis are worth reading. The fact that such tools have been simulated accurately and the results published in peer reviewed journals suggests something will come of it eventually. Roll on the replicators! They would transform society, eventually eliminating most industrial production and agriculture, taking many jobs but also ensuring everyone's basic needs can be met; and that's above and beyond the vast benefits of nanotechnology, most of which we will see long before any such atom-by-atom assembly is possible. They would probably also require a lot of power, but technology would likely advance, and anyway that problem would likely solve itself real fast... There would also be a great pressure to suppress such technology, and/or to make it impossible to obtain designs for objects... Plus you have all the issues with synthesising dangerous items, but we'll go through most of them in cheap synthetic biology long before that point; IMHO it won't be the end of the world, the good guys are far better resourced than any hypothetical bioterrorists and the research to produce a useful weapon will likely be very expensive once there are good vaccines for the traditional (and not easily targetable!) enemies such as ebola, smallpox etc.

I repeat that I am not a transhumanist, I am not even a tech-optimist; I want the right technology: For instance, I strongly believe that we need to do something about carbon dioxide emissions right now. The fact that we might be able to pull carbon out of the atmosphere on a large scale eventually, with industrial air capture, biotech or nanotech, or that in 100 years we may have weather control technology, doesn't help us if we are only a few years away from the precipice (especially as there is a big time lag even once we have the technology): Feedback effects are large, somewhat uncertain (which is a reason to do something, not a reason to ignore them) and already beginning to be visible, and hundreds of thousands of people are already affected severely by climate change - by some estimates, hundreds of thousands are killed every year. Last year was one of the hottest on record globally (and in many places), and the UK's cold weather was likely an indirect result of the global high. And then, leaving the question of the future habitability of the earth aside for a moment, there are side benefits like clean air, health improvements, and avoiding peak oil - which, short of China's economy collapsing, is almost upon us. Although if peak oil results in using more coal - to make diesel or to power electric transport - then it will be an unmitigated disaster. Sometimes the market cannot see past its own nose, and it needs some prodding. Finally, pressing ahead with green technology - which means not only funding its development but also forcing its deployment - is good even in the short term because it creates jobs and industries, and is a large and sustainable new market which those countries that intervene in it will lead - as we've seen in e.g. Denmark and Germany, and as China is rapidly demonstrating.

For another example, Japan is illustrating once again that nuclear fission is dangerous; it is also expensive, a proliferation risk, and not the best technology to balance out and back up renewables. The way forward on electricity has been clear for some years now, at least in Europe: a lot of onshore wind, a fair bit of offshore wind, huge international interconnects, the existing hydro, some Saharan solar if possible, demand management (appliances that know when electricity is cheapest), and a little bit of storage (for instance in electric cars). The old models ignored that last part, storage, and assumed some biomass; the latter is probably impossible - sustainable biomass is likely to be rare and expensive, and reserved for aviation, long-distance haulage and heating (hopefully the first two will be transitioned to rail as much as possible). Nuclear is a last resort, and probably not necessary at all.

Obviously there is far more going on in Japan than a moderately serious nuclear incident. I don't want to seem callous, although many horrible things happen in the world regularly which get less publicity and kill a lot more people - and many of them we are responsible for to some degree. I wish the Japanese people a speedy recovery, and hope that the consensus that global warming only weakly increases seismic activity is true...

And then there's Libya ... retaking Ras Lanuf is a long way from taking Benghazi, but it is very unfortunate, especially if Gaddafi manages to sell some oil (illegally). Looks like it will take a long time to sort out. No sign of progress in Iran, but hopefully the next election will trigger something. We'll see...

2011/02/26

Return of the Test Network, Part Two

The testnet has returned! This is a separate network, incompatible with Freenet, which has a separate repository, separate auto-update keys etc. This is ABSOLUTELY NOT ANONYMOUS. It logs aggressively, and all nodes connect to my central coordinator node. I can then request logs for any period, along with a few other status indicators, from any node. The network is intended for three tasks:

  1. Finding difficult bugs where you need to be able to see the logs from both sides. Particularly in the packet transport code and other fairly low level stuff, but also probably in requests etc.
  2. Testing potentially disruptive changes.
  3. Attack simulations, but only when explicitly authorised by me. (I.e. if I'm debugging load management I don't want people DoSing it at the same time just to prove it's vulnerable; obviously only well-behaved hackers will respect this rule - those who aren't will just attack the main network anyway.)

To use the testnet, you need to download the installer:

Windows testnet installer

Linux/Mac testnet installer

(It should be at least version 12)

System requirements:

  • Minimum 5GB disk space requirement for logs
  • Reasonable memory (it keeps up to 100MB of logs in RAM)
  • Logging may cause problems if you have a slow disk, a lot else running, or latency-sensitive applications.

Note that the testnet uses different ports for clients by default:

  • FCP on 19841.
  • Fproxy on 19488.

Also, Fproxy will stubbornly declare on every generated page that you are using the testnet. However there is no protection for FCP.
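
For example, an FCP client pointed at a testnet node just needs the non-standard port; the handshake itself is unchanged. A minimal sketch follows - the FCPv2 ClientHello fields here are written from memory, so double-check them against the FCP documentation before relying on this:

    // Minimal sketch: connect to a testnet node's FCP port (19841 instead of the usual 9481)
    // and send a ClientHello. Field names are from memory of FCPv2; verify against the spec.
    import java.io.*;
    import java.net.Socket;

    public class TestnetFcpHello {
        public static void main(String[] args) throws IOException {
            try (Socket s = new Socket("127.0.0.1", 19841);
                 Writer out = new OutputStreamWriter(s.getOutputStream(), "UTF-8");
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream(), "UTF-8"))) {
                out.write("ClientHello\n");
                out.write("Name=TestnetExampleClient\n");
                out.write("ExpectedVersion=2.0\n");
                out.write("EndMessage\n");
                out.flush();
                // Print the NodeHello reply (one message, terminated by EndMessage).
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                    if (line.equals("EndMessage")) break;
                }
            }
        }
    }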

Opennet *may* work, as my testnet node is a seednode and is listed in the testnet seednodes file; however it hasn't been tested yet. I also haven't yet tested auto-update, and given that experimental code may be deployed, you may have to update your node manually from time to time. The adapted update scripts do appear to work, though.

Obviously, don't share anything illegal on the testnet. Logs will sometimes include filenames of downloads etc. So we might discover illegal files (by accident), and this would put us in a difficult position, not to mention making you traceable!

Don't share identities (FMS, site keys etc) between the testnet and Freenet either, unless you want them to be traceable.

The more technical folks might like to look into copying content from one network to another via binary blobs; otherwise just insert stuff etc.

While the network is small it may be necessary to set the "store everything in the datastore" option to make inserts work.

(You shouldn't use Freenet itself for stuff that is illegal under US/UK law either; if you tell us you are doing so, we must refuse to give tech support, ban you from IRC, the lists etc; this is long-established policy, based on the Grokster judgement.)

If we have a few hundred people on the testnet, it should make it significantly easier to debug various low level bugs and test new potentially disruptive code before deploying it. So please consider installing a testnet node! Note that it may require some manual maintenance, and is much more likely to break badly than the main Freenet. Thanks.

2011/02/23

Freenet now accepts donations in bitcoin!

Freenet now accepts Bitcoin for donations. We've had various weird donation options over the years and with very few exceptions nobody has ever used anything except PayPal (and wire transfers for big donations, which have recently become a big part of our income). However there was a lot of community demand (possibly partly driven by a speculative bubble), and it is technically a rather fascinating system. Nagging the devs usually helps; this is a basic principle - we are bad at prioritising (even me, and as a paid codemonkey I have to prioritise). And if your nag comes with a promise to make a moderate sized donation, all the better! We've had 205 BTC so far. One BTC is worth just under a dollar, and exchanging them for dollars is somewhat tricky; it becomes a lot easier once we reach $800, so if you want to make a BTC donation now is the time! We will try to avoid keeping a large bitcoin balance since the price varies quite a bit (see here).

Bitcoin is not strictly anonymous, although strongly anonymous systems can be built on top of it without too much difficulty. Actually it provides complete transparency - all transactions are broadcast to all nodes, although there is some privacy as tracing accounts to users is nontrivial.

One big difficulty for me is that Bitcoin distributes the process of creating scarce currency (which is a good thing) via hashcash puzzles which get ever harder (which is a bad thing). There is a fixed schedule for the rate at which new currency is introduced (the inflation rate), which is 30% this year, 10% next year and falls rapidly, all but stopping by 2033. But in the meantime, assuming rational behaviour, we can expect (currency inflation) * (value of total money supply) to be spent every year on minting new coins by solving seemingly pointless math puzzles. This could get significant if the economy is large, although to be fair the banking system is also pretty significant. This is the main reason why I am somewhat skeptical about bitcoin. Mitigating factors: first, currency inflation falls rapidly. Second, green electricity is possible. Third, it may eventually be possible to turn the pointless math puzzles into useful ones (e.g. distributed protein folding or similar); I've discussed this at some length with the bitcoin folks, but it seems they've been over it before, so we'll see. In any case I'm not 100% sold, but it's Ian's decision, we need the money, and there are many reasons to think we might get quite a few donations in it at least in the short term; and it is definitely an interesting technology.
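
As a back-of-the-envelope illustration of that expectation (all the numbers below are made up for the example, not real market data): with rational miners, total spending on puzzle-solving should roughly approach the value of the newly minted coins, i.e. the inflation rate times the value of the money supply.

    // Back-of-the-envelope sketch of the claim above. All inputs are hypothetical.
    public class MintingCost {
        public static void main(String[] args) {
            double moneySupplyUsd = 5_000_000;  // hypothetical total value of all coins, in USD
            double inflationRate  = 0.30;       // 30% new coins minted this year, per the schedule
            // With rational miners, spending on hashing tends towards the value of the new coins.
            double expectedAnnualMiningSpend = inflationRate * moneySupplyUsd;
            System.out.printf("Expected annual spend on puzzle-solving: ~$%.0f%n", expectedAnnualMiningSpend);
        }
    }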

Another difficulty is that it is deflationary in the long term. It is also deflationary whenever use of it expands faster than the programmed increase in the money supply - which is to say right now. Long term this may cause problems.

The main difficulty is that, like all new and somewhat anonymous currency systems, it's something of a pain to get money out of it. There are a lot of options for doing so, but reputation is an issue; in small quantities you have to go via other money transfer services such as Liberty Reserve, and in large quantities you're risking more money...

In any case, if you have BTC, send us some! Our address is:

1966U1pjj15tLxPXZ19U48c99EJDkdXeqb

Including it here should help to authenticate it; if donating via the website, click the link to go through SSL; I have also posted to chat and devl a signed message including the above address.

If you don't have BTC, and you want to donate to Freenet (most of FPI's donations fund my work at the moment), you should probably use PayPal, Google Checkout, or a wire transfer (especially for large amounts), as these are more efficient.

Another interesting alternative money technology is Ripple. See here. The main problems with this are 1) whether it would actually work (psychologically, risk analysis of our friends etc), 2) how to bootstrap it (this is the big one; my guess is it will eventually be bootstrapped from social networking sites, when people start buying stuff directly through recommendations on such sites), and 3) the fact that a fully p2p implementation is unlikely, due to intractable issues with timeouts meaning that nodes need to be online, and perfect routes being essential if there are variable fees and interest; hence I'd expect maybe 1000 datacenter-type servers exchanging full data constantly, although it might be possible to construct good routes in a distributed manner with a lot of data exchange...

Both ripple and bitcoin are essentially experimental at this stage. They could fizzle out or possibly fail spectacularly. Or they could perhaps be used for mass tax evasion, which is probably what a lot of the people behind Bitcoin want. Personally I believe avoiding taxes is unacceptable, but that doesn't mean that we should avoid all opportunities to make our currency systems more decentralised, more accountable, more local, more efficient (in different senses) and better at doing what money is actually supposed to be for (enabling people to exchange energy fairly). I have a fair amount of sympathy for traditional alternative money, but its flaws are well known; technology may offer some interesting new options.

In terms of politics, arguably a lot comes down to crowdfunding. For instance, UK political parties have a largely apathetic and greatly reduced support base, and rely largely on big donors - unions, but especially rich people. One solution is state funding for political parties; this goes a long way towards permanently answering the reasonable complaint that all parties do what the super-rich tell them to do; unfortunately it is not going to have the support of both enough politicians and enough of the people any time soon. The other solution is to energise the supporters and get a lot of small donations. Obama managed this in the US. UK politics would be a lot more representative and credible if this happened here. Unfortunately it needs a leader people can really believe in, and (s)he doesn't seem to exist. The broader point is that economic democracy matters, in lots of ways.

Freenet, climate and miscellaneous politics

Apologies for the lack of obvious progress on Freenet, the testnet will be operational soon, and then I plan to deploy the new Freemail which has a number of important bugfixes. Then it'll be low level bug fixes for a while, and then hopefully we can deploy new load management at last - or at least test it on the testnet.

On climate, we will be lobbying our MP again (Robert Goodwill), in May, possibly about the Energy Bill and post-Cancun situation. There are people organising public meetings but I don't have the time/energy at the moment. See The Big Climate Reconnection.

Oh, and if you have a vote (UK), please vote for the Alternative Vote (Wikipedia definition) on May 5th. Tactical voting sucks, and AV eliminates the need for tactical voting while preserving all the good things about First Past the Post (most notably one constituency, one MP). Even if you always vote for a mainstream party within the top two in your constituency, you should seriously consider it; if you don't, it's a no-brainer. It's true that very few countries use AV, but look at what they use instead: Germany, for instance, uses something like AMS and has effectively a five-party system. IMHO that is a good thing: while there are disadvantages to coalition negotiations, there are advantages too (look at the coalition agreement), and two-party systems suck: neoliberal deregulatory ideology created the banking crisis, but a neoliberal-infested Labour (left of centre) government was in charge, so the only option was to vote for the Tories, who supported banking deregulation even more enthusiastically and are even more in bed with the bloated and unsustainable City of London financial services sector. Even if you're approaching it from the other end of the political spectrum (like many of the ancaps I mentioned earlier) I'm sure you can still come up with reasons why two-party systems suck. Of course, using AMS in Britain, as most small parties would like (for obvious reasons), and as is already used in Scottish elections, would have some larger issues. Most notably, constituency support becomes more of a problem (in such countries this is often handled at the party level), and in a hybrid system the directly elected MPs would have constituencies twice the size they are now. Another option is STV, which can't easily be counted by hand on site (a legitimate security issue), and involves potentially ranking an awful lot of candidates - but it has some interesting advantages, in that it is a reasonable tradeoff between proportionality, being able to vote for and against individuals, and constituency representation. AV has one of the benefits of fairer systems - no need for tactical voting, which will help smaller parties a little while not being fully proportional - with none of the disadvantages (although IMHO AMS or STV is worth the disadvantages in the longer term). Trivial forms of electronic voting are grossly insecure, although a paper trail may allow secure touch-screen voting and mitigate some of the disadvantages of STV; serious cryptographically safe voting is a long-term project, though there have been some interesting proposals...

2011/02/17

Return of the test network: Part One

Years ago, when 0.7 was first implemented, we had a "testnet mode". You can still find traces of it in the code. It reported node locations and connections to devs, and provided some limited access to logs etc.

It has recently become clear that there are bugs in the current code - mainly related to new packet format and new load management - that can only be debugged with access to detailed logs on both sides of the connection. It has also become clear that trying stuff out on Freenet proper is a dangerous strategy.

So I will be rebuilding the testnet shortly:

  • A separate branch, a separate release of the software, a separate auto-update key.
  • A big fat warning on all pages of the user interface.
  • All testnet nodes will periodically connect to a central tracker, which will test connectivity to their testnet port (a TCP port listening for commands to do things like downloading logfiles), and log basic stats and recent error messages. (A toy sketch of the node-side command port follows this list.)
  • Testnet nodes will be incompatible with non-testnet nodes, making up an entirely separate network, but will use the same key formats, so it will be possible to download binary blobs of content on the main network and insert it onto the test network.
  • All testnet nodes will need a minimum amount of disk space for logs, and there will be a hardcoded log threshold list (users can add more stuff to log but not block stuff on that list). We will probably also have a larger in-memory logfile buffer, so possibly a larger minimum memory requirement.
  • The wrapper will be mandatory because we will need to be able to restart (easiest way to deal with various problems e.g. shutting down routing if we repeatedly can't connect to the tracker). So will auto-update, probably.
  • There will be both darknet and opennet available on the testnet. There will therefore need to be at least one seednode, eventually.
  • To be absolutely clear: On the testnet, all nodes report to a central server, and developers (or theoretically anyone else) can connect to any node and download detailed logfiles. Thus, there is no anonymity. However it will otherwise be running the same code as the main Freenet, so allows us to follow bugs from one node to the next, and test risky changes before deploying them on the main network. There may eventually be control functions such as forcing a node to restart or to auto-update without using the UOM/UOF mechanisms (if e.g. we have managed to completely break the transport layer), but any such functions will be cryptographically protected in a similar way to current auto-update functions.
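
To make the command-port idea concrete, here is a toy sketch of what the node side might look like - everything here, including the command name, framing and port number, is invented for illustration and is not the planned protocol; in particular, the real thing would authenticate the tracker cryptographically, as noted above, rather than answering anyone who connects:

    // Toy sketch only: command names, framing and port are invented, not the real testnet protocol,
    // and there is no authentication here, which the real design requires.
    import java.io.*;
    import java.net.*;
    import java.nio.file.*;

    public class TestnetCommandPort {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(19840)) { // hypothetical command port
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream(), "UTF-8"));
                         OutputStream out = client.getOutputStream()) {
                        String command = in.readLine(); // e.g. "GETLOG 2011-02-26"
                        if (command != null && command.startsWith("GETLOG ")) {
                            Path log = Paths.get("logs", "freenet-" + command.substring(7).trim() + ".log");
                            if (Files.exists(log)) {
                                Files.copy(log, out); // stream the whole logfile back to the tracker
                            } else {
                                out.write("ERROR no such log\n".getBytes("UTF-8"));
                            }
                        } else {
                            out.write("ERROR unknown command\n".getBytes("UTF-8"));
                        }
                    }
                }
            }
        }
    }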

The immediate goal is to be able to see both sides of the story when something bad happens at the packet or message level. For instance I spent a long time trying to debug a problem where a node was acknowledging packets containing SSK requests, but was never sending an Accepted or Rejected at the message level. This could be a packet level problem or a message level problem, but the fact that the packets are acked rules out a lot of obvious possibilities.

The medium term goal is to sort out the remaining issues with new load management (particularly fatal timeouts). The longer term goal is to be able to develop stuff on master, then merge it to the testnet branch, have the testnet network upgrade, and see whether it severely breaks everything, before deploying large potentially disruptive changes. Hopefully the testnet will grow to hundreds or perhaps eventually thousands of nodes where we can test stuff.

Stay tuned!

2011/02/12

Sorry Somedude

Sorry folks again (pic from Newsbyte)

In the middle of a rather stressful week hacking on various frustrating Freenet problems, I discovered that FMS does the same thing that client authors have eternally done: it sees Freenet fail to answer a request for some reason (either bugs or overload), and then shuts down the connection and opens a new one. In doing so it increases the total load and makes more timeouts likely. Although with recent builds FCP does actually pick up the disconnect pretty quickly, so the damage is limited. Anyway, I blew my top, saying some foolish things as well as pointing out the technical issues.
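
For what it's worth, the friendlier pattern for an FCP client is to keep the existing connection and back off when the node is slow, rather than tearing the connection down and reconnecting. A rough sketch of the idea - this is not FMS's code, the helper interface is a stand-in, and the delays are arbitrary:

    // Sketch of the point above: when the node is slow to answer, wait with exponential
    // backoff on the SAME connection instead of dropping it and reconnecting (which adds load).
    class PollingClient {
        private long backoffMs = 1_000;                       // arbitrary starting delay
        private static final long MAX_BACKOFF_MS = 10 * 60 * 1000;

        void pollForever(FcpConnection conn) throws InterruptedException {
            while (true) {
                boolean answered = conn.sendRequestAndWait(); // hypothetical helper
                if (answered) {
                    backoffMs = 1_000;                        // node is healthy again, reset
                } else {
                    // Don't reconnect: just wait longer before trying again.
                    Thread.sleep(backoffMs);
                    backoffMs = Math.min(backoffMs * 2, MAX_BACKOFF_MS);
                }
            }
        }

        /** Stand-in for a real FCP connection; defined only so the sketch is self-contained. */
        interface FcpConnection {
            boolean sendRequestAndWait();
        }
    }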

Right now FMS is the best stable, spam resistant chat system available on Freenet. This is why we link to it from the homepage. And Somedude has done a great deal for Freenet - without a spam resistant chat system, Freenet would have been in deep trouble long ago. He also helped find the big XML bug, prototyped a realtime chat client FLIP, and so on. I use FMS via TheSeeker's node because I don't want to run it locally.

One problem with FMS is that it is not portable, because it is written in C++. It has in the past had both high-level anonymity exploits and remote code execution bugs. The former could have easily happened in Java; the latter could not. But in any case we cannot bundle FMS, period. The lack of reliable private messaging on Freenet has been a problem with reporting bugs - this is a generic issue, not only for FMS or for anonymously authored software but for all Freenet users; we need a reliable, working Freemail. Freetalk is based on a generic Web of Trust that will be used by a number of clients other than forum-style chat; between this and it being a Java-based plugin, IMHO Freetalk is the future, though people will probably still use FMS; I will likely use both.

To address the technical issues that started the rant above: I was aware of a problem with USK polling, which I had generally assumed to be due to a lack of capacity and unnecessary probing of early editions. The latter will be improved significantly for Freetalk and Sone and so on (but not FMS) by implementing the date-based hints. Capacity was one of the reasons why I merged new packet format, which mostly seems to be working reasonably well now, showing massive improvements in payload at least, although it may still be related to the high-level timeouts. Ratchet was working on the USKs issue, so I assumed I could ignore it for now and focus on the numerous other problems going on...
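
The date-hint idea, very roughly (the key derivation below is invented purely for illustration; it is not Freenet's actual hint format): the publisher also inserts a tiny hint document under a key derived from the date, so a poller can check a handful of recent dates and jump straight to a recent edition instead of probing every edition number since the last one it saw.

    // Illustration only: the real Freenet date-hint key format and fetch API are different.
    import java.time.LocalDate;

    class UskDateHints {
        /** Hypothetical: the hint key a publisher would insert for a given day. */
        static String hintKeyFor(String siteName, LocalDate day) {
            return "SSK@.../" + siteName + "-DATEHINT-" + day; // placeholder key; format invented
        }

        /** Poller side: check the last few days' hints before falling back to sequential probing. */
        static long findRecentEdition(String siteName, long lastKnownEdition, HintStore store) {
            LocalDate today = LocalDate.now();
            for (int i = 0; i < 7; i++) {        // look back up to a week (arbitrary)
                Long hinted = store.fetchEditionHint(hintKeyFor(siteName, today.minusDays(i)));
                if (hinted != null && hinted > lastKnownEdition) return hinted;
            }
            return lastKnownEdition;             // no hint found; probe sequentially from here
        }

        /** Stand-in for fetching a hint over Freenet; defined so the sketch is self-contained. */
        interface HintStore {
            Long fetchEditionHint(String key);
        }
    }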

It appears there is also some sort of FCP- or client-level bug which causes a whole bunch of SSKs to time out. More information is needed to resolve this; I have posted on FMS about it. It might be due to losing FCP messages; it might be due to a bug at the FCP level (doubtful IMHO, as it is very simple); it might be due to a bug at the client level losing all the requests in a given bundle (I doubt it, it doesn't seem to affect Freetalk, but such things have happened in the past); or it might indeed be a capacity or priorities issue...

Current progress and other stuff

For the last week I've been struggling with opennet announcement being very slow, and for the last half of it, Update Over Mandatory. What was happening was that a large number of nodes got stuck at versions around 1320, but nonetheless had auto-update enabled (otherwise they'd stop announcing eventually). So they constantly announced, and tried to update. Unfortunately the UOM transfers were failing. That has now been fixed, as far as I can see. It is likely that we will regain a significant number of nodes, as well as the announcement load reducing dramatically. Plus, there were various issues with announcement itself. In any case, by 1352, all of this appears to be solved: announcement is fast, UOM is fast, it all works, and I no longer suspect attacks.

Also, fproxy performance seems significantly improved, and there is anecdotal evidence that downloads are up too. Inserts are definitely working significantly faster.

That just leaves the zillions of other bugs and some major feature work:

  • Realtime requests cause transfer failures, which cause backoff: Some of the realtime backoff is caused by higher-level load management issues, but much of it is caused by transfer failures, which are probably related to timeouts.
  • Fatal timeouts causing disconnections on darknet: Some of this is higher level issues (message/request level), and there is code that can be introduced to try to deal with it there, however a lot of it is probably just timeouts.
  • Timeouts in general: We are getting an awful lot of timeouts. We may still have problems with the new packet format, but I doubt it - it's working reasonably well now. We may just be over-logging and over-reacting. Or something else may be going on. We will probably need both-sides logs to resolve this, which probably means a new testnet. It would also be worth thoroughly testing the queueing code, and possibly thinking about per-peer limiting based on known bandwidth (of course then we have to figure out how to prevent bad feedback problems, how to integrate it with new load management etc). (A token-bucket sketch of per-peer limiting follows this list.)
  • USK polling problems: When I have looked into this in the past it's mostly looked like a capacity problem, although there appear to be some bugs resulting in polling stuff that is earlier than what we've been asked for. Also, Freetalk and some other apps use non-date-based USKs (unlike FMS), so we need to make the date hints work. This would also help with freesites etc.
  • SSK/FCP problems: Need more information from FMS users.
  • New load management: We still need to merge this before 0.8.0 IMHO. The current ("old") load management system is fundamentally unsound, and the new one should improve security/robustness, performance, and particularly data retention (at least once implemented for inserts).
  • Darknet enhancements: This is another essential feature for 0.8.0. Opennet is fundamentally unsound. Darknet is the future. But it must be easy and fast. An interesting recent development: Fixing UOM had the side-effect of solving a long-standing problem with bulk transfers, so F2F file transfers are dramatically faster in 1352.
  • Local performance work, especially the store-io branch: This needs merging at some point. It needs testing too. More work on disk I/O is a good idea, maybe nextgens can be persuaded.
  • node.db4o auto-backups: This is in the "optional" category. It would help to avoid a lot of frustration for end users but probably won't make 0.8.0.
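
On the per-peer limiting idea mentioned under "Timeouts in general": the classic building block would be a token bucket per peer, refilled at that peer's known or estimated bandwidth, so we simply don't send faster than the peer can take. A minimal sketch - illustrative only, not Freenet code, and it deliberately ignores the feedback and load-management integration questions:

    // Minimal per-peer token-bucket sketch, illustrating "limit by known bandwidth".
    class PeerTokenBucket {
        private final double bytesPerSecond;  // the peer's known or estimated bandwidth
        private final double capacityBytes;   // maximum burst we allow
        private double tokens;                // bytes we may send right now
        private long lastRefillNanos;

        PeerTokenBucket(double bytesPerSecond, double capacityBytes) {
            this.bytesPerSecond = bytesPerSecond;
            this.capacityBytes = capacityBytes;
            this.tokens = capacityBytes;
            this.lastRefillNanos = System.nanoTime();
        }

        private void refill() {
            long now = System.nanoTime();
            tokens = Math.min(capacityBytes, tokens + (now - lastRefillNanos) / 1e9 * bytesPerSecond);
            lastRefillNanos = now;
        }

        /** Returns true if we may send a packet of the given size to this peer right now. */
        synchronized boolean trySend(int packetBytes) {
            refill();
            if (tokens >= packetBytes) {
                tokens -= packetBytes;
                return true;
            }
            return false; // caller should queue or back off rather than send and cause timeouts
        }
    }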

Long live Freenet, or something!

Have finished antibiotics for now, hopefully I won't need any more... Egypt is fascinating.

2011/01/30

Interesting times: Please test new load management!

There have been problems with the network recently. There are lots of theories floating about: a deliberate attack, realtime requests causing loads of backoff, ...

I have posted to FMS to try to gather information on what the main symptoms are. The main things seem to be very slow downloads (estimates varying from a factor of 3-4 slowdown over a long period, but with recent builds particularly bad, to a factor of 40 since new packet format), slowish fproxy, and slow bootstrapping.

Build 1339 separates backoff for realtime (fproxy) from backoff for bulk (everything else). This seems to help IMHO but others have said negative things. Part of its purpose is to see if this is the problem - does bulk recover some lost speed?

However, new load management is really close to being ready. I will try to release it this coming week or the week after. Much more widespread testing would be really helpful. I have released a snapshot, so you can run update.sh testing (Linux/Mac) or update.cmd testing (Windows). Testing it on the main network is enormously helpful. Expect errors in the logs; reporting a selection of these would be helpful, but I probably know about most of them; some of them include timings, which would be interesting, especially the worst-case ones for bulk or realtime. Some log messages are just obviously broken - anything with "ArrayIndex", "Illegal", "NullPointer" etc in it, for instance - and these really need to be reported. Likewise, deadlocks, severe performance loss etc are things I need to know about. Please report performance on FMS or elsewhere. Don't turn off AIMDs for now, as that won't happen until new load management is fully deployed (and even then initially only on isolated testing nodes).

Organising a parallel testing network via exchanging darknet connections would also be a very useful thing to do. So we can see what the effects are of a network composed entirely of new-load-management. Try to keep this in parallel to the main network rather than instead of it though: If you have the bandwidth, run nodes on both. Feel free to exchange friend references via FMS, Frost, IRC etc, purely for the testing network. I may reinstate the full blown testnet (a non-anonymous network set up specifically to help developers track down difficult bugs), soon, whether or not we deploy new load management.

So that's what you can do to help get new load management out of the door. Why would you want to? Well, we've been building up to it for the last 6 months, and many of the network's current problems (though many useful features too) are closely related to it. It should significantly improve throughput, darknet performance and routing accuracy (and hence data persistence, at least once I implement the insert part), work better with realtime requests (although probably at a slightly higher latency), and generally solve a lot of long-standing problems. It will also be far less easy for a third party to attack it (the current load management model is somewhat fragile). We will still need a lot of debugging on new load management, a small amount of additional implementation work (e.g. inserts), and a lot of other debugging (e.g. realtime transfer failures). The last part may only be fixable with a full testnet. However, IMHO the longer we leave the network in limbo half way between old and new load management, the worse things will be.

So, thanks folks, and please test the new version! If you're going to build from source you want the tag testing-build-1340-maybe-merge-new-load-management-pre1, on the branch merge-new-load-management, rev 47edfe611a1f088b51fccb36d3e7710e28c3d2b8.

Now I'm gonna shoot some beasties, and come back on Monday!

Having talked things over a lot, I'm much happier about Freenet: We have a good handle on many of the larger problems, we have enough time to deal with them, we have a limited set of plausible theories, we have ways to find out what the problem is, and if it did turn out to be an attack (which IMHO is unlikely), getting the engineering right will make it much harder (and ultimately darknet may make it dramatically harder - stay tuned for big progress on darknet usability after the current round of problems is sorted out; on security, the big question is whether we can fix the Pitch Black attack convincingly, Oskar has a solution, Evan doubts it but thinks there probably is a solution, snark may be getting involved; we'll see). I really do care about Freenet, and I'm not just doing things randomly, but I think we have a reasonable chance of going from well behind where it was a year ago to significantly faster, at least on some metrics, as well as being more secure and a better architecture.

Now I'm going to rest (involving sleep, gaming and perhaps the odd walk), see you all on Monday! Health is annoying but not serious, and getting sorted, at present. As for my religious-philosophical struggles, see below; I'm not always entirely happy one way or the other, and do need to find more social contact, and there are a great many things I am uncertain about, but that's where I'm at, at least intellectually.

Egypt is fascinating, let's hope it turns out the right way... I don't believe those who say that if the Middle East gets democracy it will all turn into radical islamists who will then attack Israel; I just don't believe suicide is all that popular. More importantly, the demonstrators are going to need to keep it up, and increase their numbers, although the government is clearly on the back foot at the moment... And all without telecommunications, but maybe that won't be a big problem given the intensity it's already reached. Communication tools are needed that can work when the government pulls the plug; whether Freenet, or something using some Freenet tech, has anything to contribute to that is a long-term question.

See you.

2011/01/22

Difficulties

Freenet has a serious problem with users leaving, apparently since roughly December. It is not immediately clear why people are leaving. The old uninstall survey on Google Spreadsheets never worked and we abandoned it, and we haven't got around to installing anything similar since.

My working hypothesis is that it is related to poor performance. There has been talk of severe breakage of USK polling and possibly SSK issues for a while, although the problems with bootstrapping on Freetalk should be somewhat improved. More broadly, I've been trying to get the last critical features fixed before working towards a feature freeze and an alpha release; unfortunately some of this has been fairly disruptive. The important features:

  • New load management - This is now getting really close. The principle is that we tell our peers how many requests we can accept, so they only send us that many, and that we queue a request until we can send it on to a node reasonably close to the target location (a rough sketch of the idea follows this list). This should greatly improve performance - both throughput and data persistence (and it will also help on darknet) - but some of the changes made along the way have been disruptive: both the largish changes themselves (e.g. turning off fair sharing between request types, which is why SSKs are routed a lot better than CHKs at the moment, and more recently the work on timeouts) and many of the related bugfixes have caused big bugs, or at least big network adjustments...
  • Freetalk - We are on the verge of a release candidate for the new version, which will be tested briefly before becoming semi-official. Both Freetalk and the underlying WoT have been under heavy refactoring (internal changes of all kinds) for some time. New jars will be deployed very soon (I need to finish reviewing the changes, and p0s needs to finish last second debugging). This resets the identities, boards and messages (by changing where the messages are fetched from / posted to), to avoid backward compatibility issues.
  • Darknet enhancements - The first part of this to be deployed is the new per-friend trust level, which replaces the global friends trust level. On the foaf-connections branch we also have a per-friend visibility level: in future your friends can be visible to your other friends, if both you and they agree. This is strongly recommended, as we will connect to these friends-of-friends to help you get connected, to improve performance via having more darknet connections, to get onto Freenet even when your direct friends are offline, and to make it easier to get onto Freenet in the first place (no more two-way noderef exchange - you can usually just send somebody an invite), plus it allows you to see your friends' friends' nicknames and connect to them as full friends if you know them (exchanging a password to prevent spoofing). From what I've seen so far this should not be very difficult to implement, and it should make darknet - and especially viral expansion of darknet - a lot easier, faster and more useful. You'll connect when ONE of your friends invites you, but immediately have many connections. You'll then have a look and discover you know 3 of his friends, exchange passwords over the phone, and get connected to them too. Then you can create an invite package of your own, including the installer and everything you need to get connected, which you give to a few of your other friends. This is all absolutely vital because opennet (the way in which almost everyone connects to Freenet at present) is fundamentally insecure, as has been discussed at length on the email lists and IRC. The short version is that on darknet you choose your friends; on opennet the bad guys choose you. It's not just that there's a small chance of connecting to a bad guy; it's worse than that: they can hunt you down gradually in many cases, or just connect to everyone (which is disturbingly cheap given the relatively low bandwidth of most nodes).
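
A rough sketch of the new-load-management principle from the first bullet above - all names and thresholds here are made up, and the real implementation is far more involved: each peer advertises how many of our requests it will accept, and we only hand a request to a peer that both has a free slot and is reasonably close to the target location, queueing it otherwise rather than misrouting or overloading anyone.

    // Toy model of the principle only; not Freenet's actual new-load-management code.
    import java.util.*;

    class LoadManagedRouter {
        static class Peer {
            final double location;        // position on the 0..1 keyspace circle
            final int advertisedCapacity; // how many of our requests this peer says it will accept
            int inFlight;                 // how many we currently have running to it
            Peer(double location, int advertisedCapacity) {
                this.location = location;
                this.advertisedCapacity = advertisedCapacity;
            }
            boolean hasFreeSlot() { return inFlight < advertisedCapacity; }
        }

        private static final double CLOSE_ENOUGH = 0.1; // arbitrary threshold for "reasonably close"
        private final List<Peer> peers;
        private final Deque<Double> queue = new ArrayDeque<>(); // target locations waiting to be routed

        LoadManagedRouter(List<Peer> peers) { this.peers = peers; }

        /** Circular distance on the keyspace. */
        static double distance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }

        /** Route now if a close-enough peer has a free slot; otherwise queue the request. */
        void routeOrQueue(double targetLocation) {
            Peer best = null;
            for (Peer p : peers) {
                if (!p.hasFreeSlot()) continue;
                if (distance(p.location, targetLocation) > CLOSE_ENOUGH) continue;
                if (best == null || distance(p.location, targetLocation) < distance(best.location, targetLocation))
                    best = p;
            }
            if (best != null) best.inFlight++;     // send it (the sending itself is not shown)
            else queue.addLast(targetLocation);    // wait for capacity rather than misrouting
        }

        /** When a peer finishes one of our requests, free its slot and retry whatever is queued. */
        void onRequestCompleted(Peer p) {
            p.inFlight--;
            int n = queue.size();
            for (int i = 0; i < n; i++) routeOrQueue(queue.pollFirst());
        }
    }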

So, apologies for the problems we've been having recently. Some of the poor performance has just been down to having so many updates in quick succession - I hope that we can reach a stable one-update-a-week pattern again very soon. But it's likely to be a bit rough for a while still as new load management is deployed. However, it will be worth it, new load management solves many problems we've been struggling with since the 0.4/0.5 era.

More broadly, development has been under a lot of pressure for a while. I originally agreed to a January 31st deadline for a feature complete alpha (I have since been told that such a thing is a first beta). I wasn't aware at the time that we would need the darknet enhancements, that Freetalk would take so long (although this has bought time for me since it's p0s's project), or that new load management would take so long (but we're definitely getting there). I am of the view that we can probably reach feature complete status by the end of February however - hopefully.

So I try to prioritise. And everyone knows I'm not very good at it. And sometimes this means things other people regard as important get postponed, marginalised, ignored or forgotten about. Or that I say stupid things to valued volunteers. I will try to do better, but please bear in mind that we need a release soon, and that may well mean your code can't be merged until after the end of the release process. We may need to branch if there are big, hard-to-review contributions coming in - but at the moment there aren't any.

We may have missed our ideal publicity window (arguably shortly after the wikileaks fun), but on the other hand a lot more people recognise now that censorship on the internet is a serious problem, and that the solutions that people have always assumed will deal with it are precarious at best. DNS can be forced off the internet. Google can be got at (not in the case of wikileaks but court orders have hit them in the past). Names can be demanded from twitter just as from anyone else (even if there's no practical censorship of such services just yet). Any site on a small hoster can be forced off the internet relatively cheaply via DDoS, and any site on a big hoster can be forced off via political interference (both of these happened to wikileaks). And modern, developed, industrialised, democratic nations are preparing to comprehensively block websites which contain copyrighted materials. Such powers will inevitably be misused to block places like wikileaks (which hosts commercial leaks, and the MPAA asked about blocking it under ACTA), or xenu.org (scientology claims copyright on the Operating Thetan documents, which accuse Jesus of being a paedophile amongst other things). Many such documents are legal under most countries' law because they are in the overwhelming public interest, in spite of being copyrighted. But what if you're not a big organisation with deep funds for fighting legal battles to high courts in many different countries? Freedom of speech online is hanging off a precipice, and more people recognise that now than have done for a long time. However, most nightmare scenarios which might kill Freenet, such as blocking all peer to peer communications, are unlikely to be anywhere near the political consciousness for many years.

So, on some levels, work is very depressing right now: Performance is well below what it has been, and users are leaving at an alarming rate. On other levels, we have a good handle on several of the fundamental long-term problems, and we have a golden opportunity in the public consciousness. Releasing immediately at any price is not a good idea, but nor is waiting too long. And we still have $14K in the bank, so we're not in an imminent funding crisis. So there's everything to play for, despite it looking pretty cloudy at the moment.

Home etc

We finally got the floors sorted out, so they no longer go up and down when you walk on them. This kicked up a lot of dust, but doesn't seem to have been catastrophic for my health (we had planned to be out of the way but that didn't happen). Another chest infection (probably pre-existent), another load of antibiotics, and finally getting my ears sorted out. Slowly coming to terms with the fact that I can no longer consider myself a Christian. And, going further back, the CCC demonstration for zero carbon britain by 2030 went remarkably well considering the abysmal weather (which cleared up on the day) and other reasons to be gloomy.

Apostasy

Skip this post-religion rant and go to the previous post, which is Freenet-relevant

The following has been an increasingly big issue for something approaching 6 months now. The rather long text below mostly was already written but not posted - I haven't taken a lot of time out from Freenet to write it today! If you are a Christian and have a weak faith and mean to keep it, I suggest you look away; I don't bear any malice towards Christians, and respect many of them as great people. But I no longer consider myself, with honesty and intellectual integrity, to be one. Perhaps one day I may find that I can be again, but for now I don't see conservative (in the sense of respecting scripture) Christianity as viable, nor do I see liberal (in the opposite sense) as being viable, and I'm not convinced there is any viable middle ground either.

At this moment, I am, as far as I can tell, a weak atheist agnostic. I grew up in a Christian family and considered myself a Christian since a very early age, thanks in part to my great parents. I was never a straight down the line fundamentalist: I also have a scientific outlook, and have never been a creationist though mostly I have been a conservative. Mostly I was able to reconcile those two threads. One of the best reasons for being a Christian was dad: His extensive, if anecdotal experience of healings, answered prayer and so on, his scholarly approach (see his website, he is working on a commentary on 2 Corinthians), his consistently walking the walk, have been an inspiration, especially with his having a background in hard science and maths. Unfortunately his testimony is somewhat undermined by his belief in homoeopathy and telepathy, also on the basis of anecdotal experience: This is not a reason to not believe in Christianity but it does tend to suggest that his evaluation of anecdotal evidence and personal experience is suspect. Mum is more of a liberal, but nonetheless a firm believer. Both of them, and other strong believers I have talked to, have dramatic conversion experiences they can point to. These are great for them, but doubtful from anyone else's point of view.

I give below most of the issues that have resulted in me reaching this point:

  • Jesus takes the OT Law very seriously in Matthew 5v17-20. This is echoed in Luke 16 (which was written to Gentiles, unlike Matthew's very Jewish viewpoint). The good guys in the Old Testament generally take the Law very seriously - Psalm 119 being an extreme example. Paul doesn't teach the Law, and explicitly rejects the ceremonial law (notably in Galatians, supported by Hebrews [which wasn't written by Paul]), but nevertheless it clearly informs his moral teaching if you look at his vice lists and so on. As you'd expect from a former Pharisee.
  • I don't accept the Law. It clearly demonstrates that God is not just. While it is true that modern society is remarkably lax in some areas by biblical moral standards, it is also true that some things were allowed in the Old Testament that are to a modern audience horrifying, while other things were dealt with unrecognisably harshly.
  • In particular, there was the death penalty for homosexuality, adultery (when it infringes a man's marriage rights), breaking the sabbath, or cursing, attacking, or persistently disobeying one's parents. But rape, when it is not adultery, was lightly regulated (Deuteronomy 22).
  • This is particularly hard as the two obvious explanations don't work: First, any argument about the cosmic significance of the sex act has to consider that 1) divorce was allowed (and Jesus' teachings come right after he re-emphasises the Law, and appear to be aimed at setting a higher standard for individual believers), 2) women could re-marry after divorce (that being the purpose of a certificate of divorce, see Deuteronomy 24) and 3) in cases of fornication, the father could choose to take the bride-price and refuse the marriage. Arguments that it was about economics are also very dubious (why allow divorce? and if the girl was betrothed it's the death penalty) - unless you accept the worldly or feminist viewpoint, which is unreasonably cynical from a Christian standpoint (men's land rights, women as property, moderated only slightly in the Law).
  • Furthermore, paedophilia is never directly mentioned - but it seems likely that some of the non-betrothed virgins raped and then married to the rapist were under-age (the age of marriage being 12 for girls and 13 for boys!); later rabbinical tradition, which is not authoritative for a Christian, allowed early betrothal (not specified in the Law but seems a reasonable assumption), recommended late marriage, required consent of the parties, provided for the bride price to be paid at divorce or death to the bride rather than at marriage to the father, and made the death penalty hardly ever used in practice (and much more humane than what was commanded), but also established an age of consent of 3 years and 1 day. Clearly later tradition is not always a bad thing - there are some clear improvements here. Jesus constantly conflicted with those who maintained some of it, but probably had a lot in common with them.
  • Nor is domestic abuse mentioned beyond the requirement to clothe, feed and have sex with all your wives, not just the newest prettiest one. Which is particularly bad given that 1) Women were effectively sold by their parents, 2) Women could be acquired by rape, 3) Women could be divorced at will (unless acquired by rape), and 4) Women could not initiate divorce. Arguing that domestic abuse didn't happen seems extremely implausible. Of course many of these issues are addressed in the New Testament - but it's not enough to say that the command to love god and neighbour sum up the Law, and that it is written on your hearts; when moral disputes, such as that ripping through the Anglican Communion, come up, it is necessary to find out exactly what God's standard is, and from everything we see both in the Old and New Testaments, the Torah is an authoritative expression of God's moral standards: Even if it is only authoritative for a specific people in a specific time, I still have a problem with a God who says that being a rebellious teenager, or cross-dressing, consensual homosexuality, or blasphemy or working on the sabbath (the group identity issues here are exactly the same as in islamic countries today, of course), is worse than rape!
  • Then we have fun passages like Numbers 31 (particularly v17-18). Arguably in context it's not so bad - when Israel invaded the promised land, they exterminated most of its occupants; here they are commanded - on God's authority - to forcibly marry the virgin girls, while exterminating the rest. Worse than their being wiped out? A matter of perspective I guess. Risky by the strategic logic of the ancient world - genocide was popular precisely because if you kill everyone nobody will grow up to avenge their parents. This was revenge on Midian for its part in leading Israel into idolatry in Numbers 25, so analogous to the invasion (the usual theory is that Israel was acting as an instrument of divine vengeance on the Canaanites - God would have wiped them out with a plague or something otherwise). Note that I am not here attacking modern day Israel, or even Judaism, much of which even in Jesus' time was very moderate compared to the letter of the Torah. What I'm doing is attacking Torah literalism, which seems unavoidable if you take Jesus seriously. He goes way beyond the Torah, but he uses metaphors that make it absolutely clear that the minimum standard set down there stands ("not one iota...") - and by implication that it is God's moral standard. Since I have a problem with God's moral standard, I have a problem with Jesus. Nor can you use what Jesus says later (for instance on divorce) to avoid the problem.
  • It is impossible to prove to an unbeliever that homosexuality is bad - it is apparently an arbitrary edict; there are arguments, but none of them are very convincing (in particular, disgust is not a reliable guide to anything - look at what people thought was disgusting 100 years ago).
  • There are also difficult passages on women e.g. 1 Tim 2:15. Sometimes passages are difficult (as that one), but sometimes they are clear, as with the Law, and they just say things that the reader doesn't like. One way to deal with this is to ignore them, but I don't see how I can be a Christian with intellectual integrity if I pick and choose which bits of scripture to respect. Matthew 5v32 seems harsh, but there is an argument in John Nolland's commentary that it is a mistranslation - that what it really means is that a man who dumps his wife commits adultery, and a man who marries a woman who has engineered a divorce for her own ends also commits adultery.
  • Paul says in Romans 1v18-21 that God is obvious from Creation, and that consequently God is just to condemn those who do not see him. The phrase generally translated "clearly seen" is "rationally perceived" in several commentaries; it means a rational thought process of some kind. This is simply not the case now. The modern case for "intelligent design" would likely be some way away from the argument Paul would make, and Paul says this truth is accessible to all humans, but even if we accept that different arguments reach the same end, much of Intelligent Design is barely improved young earth creationism, or is produced using the same filtering process beloved of extremist media and deniers of all kinds everywhere. There is a God of the Gaps argument - but the gaps keep changing, and see multiverse below. What you end up with is a philosophical argument, not one from observed reality.
  • And the penalty that God is supposedly justly imposing for not seeing what isn't visible? According to the evangelicals, infinite punishment for a finite being (and for finite crimes) - whatever that even means; is he going to make our minds expand slowly throughout eternity while he gradually does more and more horrible things to us unbelievers, so we can fully appreciate them? If not, you're gonna run into all possible permutations pretty quickly, rendering eternal damnation not much worse than large but finite damnation... Of course, hell is never spelled out in modern terms, although Jesus talks about it quite a lot...
  • Romans 5v12-21 makes the doctrine of the Fall very important: Sin entered the world through Adam, death entered the world through sin, all have died since then; later on new life enters through Jesus. God didn't create us as what we are today according to evangelical Christianity - incapable of living righteously even if we wanted to, in the absence of divine intervention. In Lloyd-Jones' commentary on this he makes the case that if you don't believe Genesis 1-3 literally you have no framework for sin and therefore no gospel. However if humans evolved from (or were chosen from a species that evolved from) other animals, there must have been physical death at that point. Further, many destructive human behaviours have extensive precedent amongst animals - the bible even uses the metaphor of "the flesh". It is possible to square the circle by arguing that Paul is talking exclusively about human death, and that outside the garden of Eden nature was already harsh. There is some biblical support for this in Genesis 3v22 where the expulsion from paradise was partly to deny access to the Tree of Life - in other words, if Adam and Eve (the first two humans literally or figuratively) had remained faithful they would have been miraculously immortal.
  • There is little in the way of evidence. The three Christians I have talked to most about this all have personal testimony of some sort of subjective contact with God, and dad has, or believes he has, significant experience of healings and other such events; and having studied Paul for a long time, he takes seriously both Paul's testimony (similar to those mentioned) and his references to miracles and healings (as marks of an apostle i.e. his own work, and as ministries of others in the church). Individual subjective contact often comes as one form or another of hallucination, and there are reasons to think that these may not be authentic.
  • However there are serious doubts about both contemporary and ancient miracles. On contemporary miracles, there have been studies done on prayer with conflicting and largely subjective outcomes. And one particular Christian faith healer was followed up: After a seemingly spectacular session, 23 people showed no improvement after 6 months, and one person who had unwisely thrown away their crutches collapsed the next day. In the bible Jesus was able to do miracles only when there was faith, and he emphasises this - but surely the purpose of miracles is to inspire faith by demonstrating Jesus' authority? It certainly isn't purely out of compassion, or why would everyone else continue to suffer? For instance the blind man born that way specifically so that Jesus could heal him (the Pharisees claiming that one of his parents must have sinned to cause God's wrath). We see in the case of this false healer that faith can be destructive. Of course the counterargument is that this is one celebrity healer they followed up, and later events of her life demonstrate that she probably wasn't authentic, and anyway individual healings resulting from ordinary people praying are what it's really about (we have no apostles, but some people in the church have a gift of praying for people and seeing them healed). Well, the problem with that is the studies - but maybe God doesn't perform on demand - and arguments over spontaneous remission, confirmation bias and the likely incidence of coincidences make it far less convincing. The standard fallacy that if A happens after B then B must have caused A is relevant here. In conclusion if I had seen what dad has seen I might believe the same as he does, but since it is second-hand evidence I'm not convinced. It is also interesting to note that the places where there are the most reports of miracles are exactly the places where people are most credulous - this confirms the above re Jesus, but it is also compatible with the fact that people will tend to see things where they are looking for them, even if they are not there, and the conclusion of the meta-analysis on the power of prayer that the effect if any is strongest in the most subjective measures.
  • With regard to historical accounts, Paul defines somebody as a Christian if they proclaim that Jesus is Lord and believe in their heart that he rose from the dead. Hence the traditional claim (e.g. by CS Lewis) that if the resurrection could be proven, then Christianity would follow. The problem is that history looks for the most likely explanation for the available sources - how credible you regard the explanation as being depends on how seriously you take the possibility of miracles, so it is a circular argument; it is unlikely that anything can be proven to the necessary standard of proof, even though by the standards of historical texts we have a lot more evidence in the bible than for many other events. Nonetheless it is probably worth reading some books about this, but there are always going to be criticisms, and as many skeptical scholars as believing ones: History is just far too subjective most of the time to resolve such a question.
  • To be more specific, Tom Wright's argument is summarised here and here. A plausible case can be made that the church believed in the resurrection very early on, based on 1) Paul's letters, especially 1 Cor 15, which can be dated very early, and 2) arguments about prior written sources from the gospels, which are quite late themselves. A plausible case can then be made from that to something remarkable happening to result in this belief, and to that something remarkable being credible resurrection appearances to many believers. On the other hand, appeals to martyrdom are irrelevant as most of the stories are apocryphal, and the evidence is not IMHO overwhelming, which it would need to be, especially given that as far as I am concerned all the consequences have fallen apart - I don't see how I can be a Christian.
  • The bible is in places grossly inconsistent: Compare Acts 9 to Galatians 1-2, for instance. It seems that Luke has constructed Paul's early meeting with the apostles, defended by Barnabas, whereas Paul swears to the Galatians that he only met Peter after 3 years, and met the 12 more than a decade later. It may be possible to resolve these issues via hermeneutics - but it does make any doctrine of biblical authority very difficult. In fact we have now effectively disregarded two of the three synoptic gospels!
  • There are several science/technology issues that while not yet imminent problems may well become so very soon. These could be written off, postponed and hoped to go away - if the rest of the framework worked.
  • Current science seems to be rapidly converging on the idea of an infinite multiverse, in which every possible scenario would play out. If this is true, and can be shown satisfactorily, it would be a philosophical paradigm shift similar to Darwin or Einstein. Both had a following of deniers. Theologically it would mean that every individual decision and every seemingly impossible divine intervention happens in an infinite number of universes. It would make a mockery of free will, divine intervention and divine sovereignty. It would require one heaven and hell for each such universe. And so on. It would also eliminate any remaining arguments about the unlikeliness of intelligent life in the universe - finally removing the God of the Gaps that is the last vestige of the argument from creation to the creator that Paul alludes to. However, strictly speaking it would not undermine experience or evidence in this universe, assuming that the probabilities do work in the conventional manner - which they appear to since we are able to measure them. There has been some work on this recently, but in any case, this is far from the settled fact or convincing hypothesis with enormous predictive power and massive validation that relativity or evolution is. However, there is plenty of evidence for inflation - even if the mechanism is not properly understood - and the simplest theories all suggest a multiverse. More evidence will come from Planck and from the LHC within the next few years, although it could be decades before we have a complete theory.
  • There is a good chance that, if not within my lifetime then within the next few lifetimes, human lifespans will increase dramatically. They have already nearly tripled since classical Rome, which wasn't much before Christ (three score and 10, mentioned in scripture, was mostly an aspiration that some reached; whereas today it is a reasonable expectation). Some of that is due to infant mortality, but the figures for earlier periods suggest that that's not the bulk of it. Average life expectancy at birth was under 50 even in rich countries in 1900, yet is around 80 now. And we are beginning to get a handle on ageing: We know that progeria, which causes pretty much all the visible signs of ageing very early in life, is caused by a mutation in a single gene, and it is only a matter of time before we figure out all the pathways. We know caloric restriction extends lifespans significantly, at least in rodents, and we have drugs that mimic it. We know how to fix telomeres, although that seems now to be a relatively minor part of the ageing process. We know that reaching 100 years old is largely genetically determined, and there are 3 genes that a lot of centenarians have that protect against heart disease, diabetes and Alzheimer's. Right now gene therapy is far too dangerous to be used to patch the correct versions of the genes in, and can't remove the old versions, but drugs can be found, and in the long term perhaps we can replace genes, or engineer the next generation. There is also increasingly strong evidence that the brain is a biological system built from proteins etc, and can in principle be simulated. The Blue Brain project has for instance made enormous strides in modelling individual neurons right up to the basic processing unit of the mammalian brain, the neocortical column - with everything being carefully checked against experimental results, and based closely on analysis of real tissue. A recent paper on synapses suggests there is more going on there, but it seems likely that digital immortality is possible at least in principle, and unlikely that Penrose's claim is correct - essentially that souls exist as a physical entity pulling the strings, which cannot be modelled even by a large quantum computer. And if the Blue Brain people are even fractionally right, it won't even take a quantum computer, and may be feasible this century - they talk about 10 years for a basic model which you could teach languages and get to do IQ tests for drug modelling; I doubt very much it will be that quick! The point is, the gospel is about death. Don't get me wrong, Christianity is not a death cult, Christians on the whole care very much about the world and the people in it. But even postponing death makes it harder to spread the gospel, and ultimately postponing it for as long as you want undermines a great deal of the bible.
  • Artificial intelligence / digital immortality / mind uploading would of course cause other problems: If well-constructed, it would be almost impossible to physically harm others, although there is plenty of opportunity for sin through gossip etc. The related self-modification and especially self-analysis would pose serious problems for some core evangelical/charismatic doctrines: that when somebody believes, this is a miracle performed by the Holy Spirit breaking through in their heart; that humans are totally depraved (i.e. unable to consistently do good) in the absence of the miraculous intervention of the Holy Spirit; the gift of tongues; the whole "being filled with the Spirit" thing. How does any of this continue to make sense when you can compare before and after, when you have a backup of your complete mental state from yesterday and you became a believer today? When you can run several copies with different random numbers and watch them reaching different conclusions, and when human nature is whatever you want it to be? And if righteousness is the result of the intervention of the Holy Spirit, why can't we see his fingerprints in the mind of somebody who has been thus modified? If the answer is that God does not perform on demand, does that mean that only those believers who turn off backups and thus risk their earthly immortality are accepted by God? And if the answer is simply that artificial intelligences are not human and therefore neither judged nor saved, you have the problem that they will undoubtedly be superior to real humans, sooner or later, and real humans will want to make the transition - and that in some cases they may even do this gradually, replacing a part of the brain at a time as it nears wearing out.
  • Resource usage of course touches on another issue: If the tech-optimistic view of the transhumanists (with whom I disagree on ideology and ethics) is wrong, then we run into the tech-pessimist problem: The increasingly urgent issue of climate change, which threatens a calamity (and gross injustice) not seen for an awfully long time. It is not clear whether there is a serious conflict here or not, but it's worth considering. There are books about the Christian approach to climate change, and it is unfair to say that Christians don't care, although some of the fundamentalists are obstructionist and derelict in their duty to their neighbour. However the bible is somewhat weak on this - seemingly the biggest threat for centuries, and caused by technological progress, although it is now entirely a matter of sin: all the perpetrators (that's you and me, our corporations and governments) know full well what they are doing, and it is technologically possible even now to take the issue far more seriously. Eschatology (what the bible has to say about the future) can be very unhelpful - some of it is apparently pro-creation, Paul talking about the creation yearning for the revealing of God's people, while other parts talk about the world being utterly destroyed. I have a paper on green eschatology (from Operation Noah); it is possible to interpret it in a plausible way. The principle of stewardship is present in some parts of the bible, and there is a positive vision of the coming of the Kingdom of Heaven (supported perhaps by Isaiah 40-66) that the Bishop of Liverpool (who recently defected to the liberals) has often preached. And the Genesis account makes it clear that Adam was to work the garden - that is, not to strip-mine it for short-term gain, but to steward it for the long term, on behalf of its real owner. Really the basic reason to care about climate change is the Law of Love, the Golden Rule, the self-evident basis for most morality - that other people are as important as you are - which is entirely Christian. Many fundamentalists' conspiracy theories and loathing of all things science result in their denying the science, becoming part of the problem, voting for the Tea Party etc. Arguably modern realities may not allow creationism to be one of the "disputable matters" in the church, when a literalist view means writing off science, writing off climate change and contributing to one of the greatest injustices of the last few millennia. Similarly, the church in Rwanda failed to address big issues of sin in its society and, sadly, many of its members were involved in the genocide. Fortunately many Christians do take climate change seriously, especially as a justice issue. But apart from intellectual uncertainties there is a serious Problem of Pain issue here too - all the circumstances lately, despite all the prayer, have been going in the direction of too little being done (China does seem to be taking it somewhat seriously, considering their situation, but the US isn't going to be able to do anything until 2013, and without global agreement there will never be really strong national action; Cancun was far better than expected but expectations were very low), likely resulting in one of the most spectacular injustices of the last millennium - those who have caused it least will bear the brunt. So where is God? Much good came of the second world war, or the black death, but they were very unpleasant at the time, and a lot of the hurt fell to the weak ... 
Part of the worry this time, from a Christian point of view, is that many of the places that will be hit hardest - and are hit hardest right now - are exactly the places where the church is growing fastest. This is not a matter of divine judgement, it's good old fashioned lethal oppression of the weak by the strong for their own casual convenience.

Right now, the only reasons I can come up with for continuing to believe in God are bad reasons: Pascal's wager (you can always get the right answer by putting an infinity on one side of the inequality!), social aspects, making others happy about you (biblically "fear of man"), stubbornness (Puddleglum's defence in The Silver Chair!). And the intellectual framework just doesn't work, even ignoring the long-term sci/tech issues that admittedly may or may not become severe in my lifetime - but in principle are well within the horizon at least in theory (and in some cases are potentially career-relevant - is it immoral to work with computational neuroscientists?). Liberal Christianity does not seem to be a viable solution to me intellectually. It's true that not all liberals are resurrection deniers (and thus not Christians at all according to Paul), and it's true that the core of the gospel is not biblical inerrancy but the incarnation, death and resurrection of Christ. But to throw out the Law, even as an authoritative albeit compromised ("hardness of hearts", Matthew 19) exploration of God's moral values, brings me into conflict with Jesus himself, and probably also with Paul: Both of them valued it very highly. Throwing out Matthew might be an option, but then exactly where does one drop one's anchor when one is ignoring substantial parts of scripture whenever they conflict with one's worldly biases? Clearly the worldly biases are authoritative and not scripture! I would accept the intellectual cost of a liberal outlook if I had any real credible evidence, but right now I don't see any - other people's highly subjective and anecdotal experiences, and unresolvable debates over things that happened millennia ago, seem to be the best that is available. It might be worth reading up a bit more on the latter eventually...

If God wants me he knows where to find me. My grandad was a militant atheist up to his remarkable deathbed conversion; I am, I hope, open to answers and experience - certainly more open than he was! I'm not sure what exactly would convince me, but if God exists I'm sure he's far more creative than I am! Some of the weakest arguments for atheism are based on the principle that God is a giant - big and stupid. If there were a creator it is absurd to assume that he must have all our limitations (as the Greeks did), and therefore only care about big stuff. And if I was saved in the first place, he will probably reach me again one way or another. But for now, I will get on with my life, which is pretty full even without reference to God. And find other friends and places...

I do not at present regard humanism, transhumanism or any other life philosophy as necessary. As I mentioned, I believe (along with pretty much all the world's religions) that the Golden Rule is self-evident, if hard to apply in detail, and harder to stick to. Also, scientific skepticism has been a part of my life for a long time, if not a strong part (I'm no scientist, I'm an engineer or just a hacker - but I generally respect empiricism). I specifically reject transhumanism, because why should I fear death if it is no more than the end? Better to end than to become an agent of darkness - Darth Vader is far more to be pitied than Obi-Wan Kenobi - and no great loss if you have contributed what you can. Of course, when faced with it, doubts strike at everything you thought you'd achieved, and there is no doubt hideous pain... I also reject humanism, as while I recognise much that is bad resulting from faith, I am not (yet?) openly hostile - much good has also come from Christianity, and the same mixed picture is true of many worldviews (perhaps not on the same scale). On the other hand I am certainly hostile enough that I would be a disruptive influence in church... social contact is a problem now that I have left.

On some levels, it feels good to have closure, what is hard is not having a decision one way or another. Emotions are not a reliable guide to anything - emotions follow the decision, and solidify it, and if you make them your goal you drive yourself mad. This may or may not be a scriptural principle but it is also scientific - those with no emotions are incapable of making any decisions.

2010/12/07

Wikileaks, a publicity opportunity, and scarcity

Wikileaks is in big trouble. Julian Assange has been arrested, he and the organisation have lost $100,000, and the main website is down due to government pressure. Some of the others may go down soon too, and governments are of course busily (but slowly) granting themselves the power to block sites. They promise that this will be a judicial process but IMHO that's just not how the judicial system works in the US or UK - we will be seeing notice-and-takedown which you can dispute if you have deep pockets, just like with the DMCA.

People are talking about distributed DNS as a possible solution (which of course it isn't: IP-level and proxy-level blocking are both feasible and actually already implemented in most countries). IMHO it will suffer from the same problems that plague similar systems everywhere else: There is no good way to prevent spam. CAPTCHAs can be solved, by humans, for something like $1 per 1000, and there are companies that offer this service. Hashcash is vastly more expensive for a real user with a slow computer than for an attacker with a fast system and GPUs. And anything that handles real money will be centralised and vulnerable. Which leaves trust-based systems, such as CAcert.
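
To make the hashcash point concrete, here is a minimal proof-of-work sketch in Java - my own illustration, not Freenet or Freetalk code, and the resource string and difficulty are made up. The expected number of hash attempts per token is the same for everyone, which is exactly why it hurts the honest user on a slow machine far more, in wall-clock terms, than the spammer with GPUs.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Minimal hashcash-style proof of work: find a nonce such that
    // SHA-256(resource + ":" + nonce) starts with `difficultyBits` zero bits.
    public class HashcashSketch {
        static long mint(String resource, int difficultyBits) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (long nonce = 0; ; nonce++) {
                byte[] h = md.digest((resource + ":" + nonce).getBytes(StandardCharsets.UTF_8));
                if (leadingZeroBits(h) >= difficultyBits) return nonce;
            }
        }

        static int leadingZeroBits(byte[] h) {
            int bits = 0;
            for (byte b : h) {
                if (b == 0) { bits += 8; continue; }
                bits += Integer.numberOfLeadingZeros(b & 0xff) - 24;
                break;
            }
            return bits;
        }

        public static void main(String[] args) throws Exception {
            // An attacker and an honest poster pay the same *expected* number of
            // hashes per token; the attacker just finishes orders of magnitude faster.
            long nonce = mint("anonymous-post-2010-12-07", 20); // ~1M hashes expected
            System.out.println("found nonce " + nonce);
        }
    }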

Of course these issues also affect Freenet - particularly Freetalk. In the long term we will need to get rid of CAPTCHAs. IMHO the best realistic solution is the Scarce KSKs proposal, which involves allowing every darknet link to send a single scarce token per time period. Of course, this requires pure darknet. But as you can see from recent discussions on the devl list, opennet is IMHO impossible to really secure, short of bolting on a third-party onion network (which is also difficult to secure, but not really our problem).
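
As a rough sketch of the scarcity idea (the class and method names below are mine, not the actual Scarce KSKs code), the point is simply that the rate of new tokens is bounded by the number of real darknet links rather than by how many CAPTCHA solutions a spammer can buy:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: each darknet link may mint at most one scarce token per fixed
    // period, so the rate of new (possibly spamming) identities is bounded by
    // the number of real social links. Illustrative only.
    public class ScarceTokenBucket {
        private final long periodMillis;
        private final Map<String, Long> lastIssued = new HashMap<>();

        public ScarceTokenBucket(long periodMillis) { this.periodMillis = periodMillis; }

        /** Returns true if this darknet link is allowed to mint a token now. */
        public synchronized boolean tryIssue(String linkId, long now) {
            Long last = lastIssued.get(linkId);
            if (last != null && now - last < periodMillis) return false;
            lastIssued.put(linkId, now);
            return true;
        }

        public static void main(String[] args) {
            ScarceTokenBucket bucket = new ScarceTokenBucket(7L * 24 * 60 * 60 * 1000); // one per week
            long now = System.currentTimeMillis();
            System.out.println(bucket.tryIssue("friend-node-A", now));        // true
            System.out.println(bucket.tryIssue("friend-node-A", now + 1000)); // false: too soon
        }
    }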

Anyway, you might want to visit the Reddit post.

2010/12/02

Isn't it odd that America, northern Europe and northern Russia were the only places it was colder than usual during Copenhagen?

Of warming, cooling and divine sovereignty

Apparently I'm not the first person to come up with the theory that God is determined to prevent a climate deal happening any time soon. Monbiot did miss a link though: 2010 had a lot of hurricanes, yet very few of them hit the USA. And arguably this is the tip of the iceberg: Not only is it colder in the West (especially in winter) despite being warmer almost everywhere else, all the political circumstances are pointing in the wrong direction too. It's true that China may be starting to take the issue seriously (there is a good case to be made that China was set up as the bad guy at Copenhagen), but without the US the situation internationally is pretty hopeless until some time in 2013 - and that's if Obama is reelected and has a legislature that isn't completely riddled with climate change deniers. One of the most fun things about this is that a lot of the climate deniers are also evolution deniers - and which way does cause and effect run? You train people to treat scientists as immoral atheists part of a great international conspiracy to suppress the truth, what do you think happens?

A scientific explanation may be available: This year, the global average temperature has been somewhere between the hottest on record and the third hottest on record; there have been extraordinary events involving record high temperatures in Russia and Pakistan, yet the West is remarkably cool (Africa is hammered as usual, of course). This is believed to be the result of a major shift in the jet stream, the global wind currents. For a while this was thought to be the result of low solar output, but here and here suggest that maybe the rapidly shrinking arctic ice cover is to blame - local cooling in one place, while major heating in others, resulting from global warming (more properly termed "climate change"!). Unfortunately this coincided with the last best chance to get a US climate law on the books before COP17 let alone COP16 - and the rise of deficit politics in the US, and the Tea Party scum.

What can I conclude? Well...

  • People (many people) are self-centered and only see what is in front of them. They pretend that they need cars, foreign holidays, and all the blessings of the modern world (some of which are good) to "survive", while their actions contribute, often directly, to the rather more literal non-survival of the less fortunate (one of many such crises).
  • Even when they are hit with obvious impacts (e.g. Katrina), a good propaganda campaign which confirms what people want to believe can quickly eliminate any threat to their worldview.
  • Be careful when you claim divine sovereignty as the basis for any event! Coincidences happen, as a matter of mathematics, and sometimes they are utterly horrible. As are human actions a lot of the time.
  • The strong continue to lethally oppress the weak for their own short-term convenience. Climate change is not, sadly, divine justice, any more than 9/11 was. What it is is human injustice and short-sightedness.
  • Creationism is not a harmless difference of interpretation, or a minor, disputable heresy. The conspiracy theories required to maintain it result in a dangerous alienation from science as a whole, which has very serious consequences as it applies elsewhere.
  • If I have to skip the demo on Saturday I will be rather disappointed. While nothing will happen without America, losing the momentum in the rest of the world while we wait will result in an even weaker outcome when they eventually do wake up. An opportunity for building local alliances already met a similar fate.
  • Oh and if you're convinced nothing has happened in the science this year apart from a few largely bogus controversies, have a look at this. The precipitation map is particularly fun.

2010/11/12

Scarborough Big Climate Connection

Nine of us went to see Robert Goodwill MP about climate change. That being me, my mum Evelyn (a concerned citizen), Jane (an enthusiastic local climate activist and prop maker, formerly the driving force behind Scarborough 10:10, a campaign to get people to sign the 10:10 pledge and raise awareness by showing The Age of Stupid with question and answer sessions), with Paul and Chow from 10:10, Mark and somebody with a name starting with a G from Greenpeace (I'm not good at names and faces!), Andy from the emissions partnership, and Kevin from the Green Party and a private tenants' rights group. Most of these people I hadn't met before, and it was great to make contact; many of them are deeply involved in local sustainability, some of them wearing at least 4 different hats: 8 organisations represented by 9 people!

It went really well. Goodwill is a front-bencher, assistant government whip and formerly shadow roads minister, and a farmer, and has a remarkably positive voting record on climate (a lot of Tory backbenchers are climate skeptics unfortunately). We discussed the main demands: Energy efficiency in housing, including the Green Deal (aka Green Mortgages; a petition originally proposed this many years back, and Labour rejected it only to introduce it in their election campaign) and particularly a minimum standard for rented property. It was pointed out that most tenants are on short leases and the government's limited proposal to let tenants demand reasonable energy improvements is insufficient: we need a specific standard, that all rented properties must be band E or better by 2016. There should be help with finance for smaller landlords, possibly via some version of the Green Deal (Goodwill and Paul had experience as landlords), but while many have difficulties and demand is only going to increase, most of the fuel-poor live in rented property, and this needs to be dealt with.

We briefly discussed the proposed Emissions Performance Standard - a legal maximum grams of CO2 per MWh of electricity generated for all newly built power stations. This was a manifesto pledge but has been delayed; we are asking for enabling powers to be passed so it can be set quickly in secondary legislation once the consultations are finished and the details worked out. Goodwill told us that since this is not in the bill it's unlikely to be introduced in the first reading but it might happen in the second. We then discussed recycling and waste incineration, a topic that was raised by somebody and which Robert had some experience with while an MEP.

Running low on time, having been sidetracked a few times (which is probably a good thing, I don't want to be doing all the talking!), we briefly mentioned the international negotiations - as I understand it, while it is highly unlikely that there will be a complete, binding deal, there is a good chance of some progress on adaptation funding. Our demand in this area is, in jargon, that the government support a single UN fund for adaptation, mitigation, and forestry, funded by internationally agreed taxes on shipping, aviation and banking. In English, this means that the rich countries need to pay into a pot of money controlled by both rich and poor countries, under the UN (not the World Bank!), which will be used to deal with the enormous impact of climate change on currently poor countries (for instance, some parts of Africa seem to have climate related food crises every few years - this was not the case 30 years ago, talk to any NGO active in the area), by for instance building flood defences, planting drought resistant crops, and so on; to help them to develop cleanly (by building wind farms instead of coal plants, for instance), and to help them to preserve their forests (which are a massive store of carbon, as well as providing many other important global benefits, which we largely get for free). We created this mess, and the global poor will be hit hardest both because of geography and because they have the least capacity to cope: It's a matter of justice, as well as being absolutely essential for any binding deal to eventually be signed when America comes to its senses. Goodwill said he'd write to Chris Huhne and get back to me, so we'll see. He seems very positive, he took us seriously, and everything I have seen suggests he is very capable. He's always answered my emails, and if I ever voted for a Tory it'd be him. It is great to have the government on roughly the same page on so many of the green issues, even if they do do some really stupid things like the planned forestry sell-off; we are largely concerned with details in the main demand above. Something similar to the picture will be in the Scarborough Evening News next week hopefully.

The banner, chopped off in the photo, originally said "Scarborough Says: People and Planet Before Profit". We originally had decided not to use it but ended up doing so anyway. It provoked a useful discussion, both in the pre-meeting in the cafe nearby beforehand and with Goodwill. The short-term, front message is that right now, action on climate change is a win on all levels: It will create jobs, expand industries, improve public health, and have a great many benefits. It will help to put our economy on a firm footing now that the fountain of youth that is the financial services sector is looking frankly shaky, and it should help to get the Northern economy going, with some of the ports upgrade investment likely to be spent in Whitby; this money will be used to break the engineering bottleneck which means we were unlikely to build enough offshore wind to meet our 2020 renewables target.

As an aside that is definitely not part of what we discussed with Robert, I am skeptical that exponential economic growth can continue forever: It is possible to reduce the carbon emissions for every unit of growth, but as long as we want more physical things, and recycling is imperfect, it will be very difficult to shrink carbon intensity fast enough to account for both growth and the radical cuts needed real soon now. On the other hand, who knows what the future may bring; technology has a habit of surprising us. Unfortunately waiting for better technology is not acceptable at this stage as we are already far into the red with nature and on the verge of sheer catastrophe - we need to deploy what we have, because the time has run out.

The traditional answer to this is the service sector and the knowledge based economy. The latter is usually accompanied by ever-increasing demands for intellectual property laws bordering on protofascism and often on innovation-impeding and growth-impeding corporatism, as we've seen in both copyrights and patents in recent years (most recently Cameron and Google on UK copyright law). Given that I work on censorship resistant peer to peer software for a living, clearly I don't support this sort of nonsense - but there are many ways to run a "knowledge based economy", as we've seen.

To get back to the issue at hand, we did talk about nuclear briefly - the policy is that there will be no subsidy; the only way the economics makes sense is if there is a strong carbon price, but as our MP pointed out, if it is too strong, it might result in offshoring in heavy industry; but you have to consider that that sort of argument comes out of the steel industry now, when according to Sandbag the current scheme, the EU ETS, actually contributes significant profit to most heavy industries (effectively subsidising them out of consumers' electricity bills).

So all in all, a successful meeting. Will be interesting to see if we get any press coverage. I'm off to Leeds tomorrow with Jane for a mini-conference on building the climate movement, although I'm skeptical that I'll have time to do much in the near future...

I will post a Freenet-related update next time, I promise! I may also elaborate on the doubts issue I mentioned last time, which is partly resolved. Those who came for the climate stuff use the direct link. I will be migrating this to a proper blog in a while, with comments (sorry for the lack of comment support, feel free to email me, or reply on your own blog and send me a link), although whether that is Freenet-only remains to be seen; this site will remain up in any case.

2010/11/01

The world gets what America and China deserve...

I have, perhaps foolishly, volunteered myself to organise a meeting with our local MP as part of The Big Climate Connection. Hopefully that will go well. We are asking for more, additional, UN-controlled adaptation funding (probably from taxes on shipping, aviation and banking, and from carbon auctions, ideally with a minimum price), for domestic energy efficiency (especially for rented homes; the Green Deal should ensure owned homes are upgraded), and probably planning issues too (it can take nearly as long to get approval for a small onshore windfarm as for a nuclear reactor; offshore is vastly more expensive and has engineering capacity issues; in the short to medium term wind is a vital part of the future, and as much of it as possible IMHO; and in North Yorkshire there is virtually none). Meanwhile, America votes on Tuesday. Until America passes meaningful climate legislation there is little hope of progress internationally - and without a global deal, there will always be countries who just go and burn the last drop of oil and the last gram of coal.

From a climate point of view, it is very straightforward: Every single Republican Senate candidate either denies climate change exists or opposes any effective action to deal with the problem. Even with 59 seats (including the two independents), the Democrats weren't able to get a hideous compromise of a carbon trading law through the senate - partly because they need 60 to beat the inevitable filibuster, and partly because of coal-state Democrats who vote against it. The predictions look like they will lose control of the House and lose many seats in the Senate, making it even more hopeless... As an aside, just because the proposed bill was a hopeless compromise does not mean it was necessarily of no value; without a bill to limit the total emissions across the US, there will always be leakage. Internationally, this is also true - domestic legislation in the US enables a global deal; China won't move without the US moving first. Carbon trading, offsets, free handouts of carbon permits, and the maximum price would all weaken it to the point where it won't achieve anything like what's needed in actually reducing emissions, at least in the short term - this is pretty much the case in the EU already (we don't have a maximum price, but then we don't have a minimum price either, which is actually a gain for the US proposal) - but just having a target would likely unlock the negotiations.

Another issue arguably at stake is Obama's plans to require all peer to peer systems, web services etc to comply with intercept warrants - even if this requires major technological redesigns. This would probably affect Freenet, were it passed. It is the latest in a long line of unconstitutional attempts to make strong encryption illegal, going back decades; there is a good chance it will not happen, and even if it does happen it will be struck down sooner or later. However, given the fact that this time there is money at stake (eventually the copyright mafia will need to deal with secure peer to peer) and not merely national security (terrorists could chat over Facebook/Second Life/Freemail/Skype), it might pass - and if it's struck down they can always pass it again, like they do with some other laws... The rhetoric from the Tea Party folks is often about defending the constitution, it will be interesting to see whether that principle survives the combination of corporate lobbying and national security...

Just as with the climate law, if this passes it will have consequences beyond the US. It would have severe consequences for me personally, for instance, if my largest client, Freenet Project Inc, was put out of business. Much software is developed in the US, and while many horrible things are in progress in Europe, these things are harder to fight if they are already established elsewhere.

Which of these issues is more important? The cynic in me says they're both hopeless. The optimist says on climate we will find miraculous technology (e.g. the claims that solar PV will be competitive with coal soon, breakthroughs in battery technology etc) that we can deploy instantly at no cost and will solve the climate crisis without any major government intervention (having already blown the opportunity to create millions of jobs, put the economy on a more sustainable footing, improve public health, greatly reduce risk, and encourage technological development by using what we already have), or more realistically, that the peak oil crisis due real soon now will have such a devastating effect that governments will finally get their act together (and will miraculously choose not to go for the easy short-term option, coal) - although whether it will be possible to deal with the problem in 2020 having made minimal progress before that, is an open question. And on encryption, the optimist says it won't pass regardless - none of the previous attempts did. And even if it does it will be struck down, and FPI will find a way to perpetuate itself - until it becomes impossible for me to continue here, where we have a much weaker constitutional defence; efforts to "modernise" wiretapping are happening here too, and we already have (at least the primary legislation for, if not actually implemented) what amounts to a three strikes law, and a blacklist of copyright infringing sites...

So am I, as a Brit, saying you should all vote democrat? Personally I would, in this specific election, if I lived in the US. Fortunately I don't: I'm not convinced I'd ever move to the US even if I could, for many good reasons. My point is that what you do affects everyone else in a way that is almost unique to the USA. Think before you vote. There's an old saying that democracy means the people get what they deserve. Unfortunately here it looks like the world gets what the US and China deserve (not that China is a democracy by any stretch), at least on climate. Such is the nature of empire, and we're some way from being rid of the US empire, despite all the problems recently...

Freenet update

As explained in the last update, which didn't get inserted to Freenet (sorry folks), but is visible below, I am gradually merging the new load management system into the main fred source code. This means, roughly speaking, merging a big chunk of code, doing lots of bugfixes, letting it run for a week while fixing minor bugs and catching up with email, and once it's mandatory, repeating the whole process. Arguably we shouldn't be adding big new features at this point in the release cycle, but IMHO it is justified for load management and for zidel's new packet format. It is already well under way, although far from finished (I need to deploy most of it to debug the rest properly); many of the intermediary stages have large side-benefits, such as the recent work on block transfer and fairness between peers; it is essential to prevent relatively easy DoS attacks on the network that have been possible ever since 0.7 but remain serious and will sooner or later be exploited (although fair sharing between peers, merged in 1296, greatly increases the cost); and it is a fundamental issue which Freenet has struggled with more or less forever, yet thanks to the effort of many people, particularly evanbd and some folk on Frost some years ago, and the Content Centric Networking paper, we finally have a good idea what we're doing. Further, there is a good chance that this will significantly improve both data retention and performance in general; IMHO the first is probably more important.

The next step will be to deploy the bulk vs realtime flag, although it may make sense to do some work on the request layer first - the bulk flag will significantly increase the number of requests in flight, so it may be necessary to make RequestSender fully asynchronous first (block transfers are already asynchronous, and the async version of the code is, after some refactoring, relatively simple and fits well with the rest of new-load-management). After that, most of it is ready; what remains is turning on the load status messages, various changes relating to timeouts and to counting how many requests are on the next node, and finishing off the core changes. Once we have new load management, zidel's work will be deployed as well; it would be good to deploy it immediately but it's probably best for the network to only have one huge disruptive change at once.

Other important stuff remaining includes:

  • major optimisations to datastore disk I/O, which I might get around to soon
  • sorting out node.db4o encryption and auto-backups (possibly combined with a further reduction in disk I/O related to the client layer)
  • various merges, including the new content filters from sajack
  • minor security related changes to auto-update
  • fixing the Library search system
  • many minor user interface and usability improvements
  • deciding whether to try to get Freemail working and officially supported
  • and of course, endless debugging; there will be plenty of debugging before 0.8.0-alpha1, but most of it will happen afterwards; a short term issue is the difficulties with downloads

Critical stuff that I'm only peripherally involved in includes Freetalk/WoT, which hopefully will reach some sort of release ready status in the coming weeks, enabling us to ship FlogHelper as well (and making life easier for testers of e.g. Sone), and getting the new wininstaller deployed, which afaics only requires a new update script now. See the roadmap on the (new) wiki.

We are getting really close to 0.8.0, and we need the publicity; and there are quite a few things going on in the world that might help us, or we might help. Which reminds me, if you can translate Freenet into a new language, or update an existing translation, please contact us! It would be really awesome to have an up to date mandarin translation, or even Farsi (I am prepared to add dir=rtl's etc where needed), although the lack of broadband in most of Iran makes Freenet of only limited usefulness. There are also many European languages which various people have started translations for but are not maintained any more; the more the better!

More broadly, Freenet is working a lot better than it has been (especially in terms of working fast out of the box), and we are very near to acceptably feature complete for 0.8.0; we don't want to keep adding features indefinitely. There are big clouds on the horizon legislatively, and there is the unfortunate fact that opennet is hideously insecure; but we have a real opportunity here, and IMHO if we can get Freenet big, we can bootstrap a darknet. There is a need for further work on security even on darknet (e.g. tunnels), but if we solve some of the fundamental problems with performance and functionality, and get a lot more users, we can make some serious progress, and maybe even persist without having to constantly rely on Google's generosity!

Other things

My health is reasonable; it's not great, but it's not bad. I'm not sure I'm entirely rid of the chest infection, and my other issues aren't entirely resolved, but are probably purely a matter of diet at this point. Finances aren't great but FPI (in the form of Ian) have generously agreed to another pay rise, and it should all be sorted some time next year... Servalan (my server/workstation/myth backend/everything-box) is still managing to struggle with 8GB of RAM, apparently a big part of the reason why is my habit of having 100+ firefox tabs open, I'm playing with memory compression which may help; and of course closing some of those tabs! On the other hand I've had some major faith issues lately, which I'm at something of a stalemate with right now; but if God wants me to remain in his Kingdom with peace and clarity of thought he will eventually reveal a way forward.

2010/08/31

Load management and other things

The current network continues to have a lot of problems with backoff. Some people report it's improving but it's taking a long time. I've managed to convince Ian that rewriting load management makes sense at this point. As I explained in the previous post, this is something we've been pondering for a long time, and somebody with a better grasp of network issues might have come up with it earlier. Anyway, we have a fairly good idea of what needs to be done, and I've made a start on implementing it...

Basically the principle is 1) each node figures out what its capacity is (based on how long it will take to transfer everything if every request succeeds), 2) we never route to the wrong node unless the right node is severely broken, and 3) we queue, and to some degree retry, until the request is routed to the right node. Hence we can have accurate routing, load management that doesn't care whether requests are local or remote and so leaks no information, and that runs as fast as possible without causing major misrouting.
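
As a back-of-the-envelope illustration of point 1, the capacity estimate, here is a toy version in Java; the numbers and names are made up for the sketch, and the real bandwidth-liability code in fred presumably has to juggle rather more than this (both directions, different block types, and so on).

    // Accept a new transfer only if, assuming every request in flight succeeds,
    // everything can still be sent within a deadline. Illustrative only.
    public class CapacityEstimator {
        private final int outputBytesPerSecond;
        private final int blockSizeBytes;
        private final int transferDeadlineSeconds;
        private int transfersInFlight;

        public CapacityEstimator(int outputBytesPerSecond, int blockSizeBytes, int transferDeadlineSeconds) {
            this.outputBytesPerSecond = outputBytesPerSecond;
            this.blockSizeBytes = blockSizeBytes;
            this.transferDeadlineSeconds = transferDeadlineSeconds;
        }

        /** Worst-case bytes we would owe if every current request succeeds. */
        private long committedBytes(int transfers) {
            return (long) transfers * blockSizeBytes;
        }

        public synchronized boolean canAccept() {
            long afterAccept = committedBytes(transfersInFlight + 1);
            long deliverable = (long) outputBytesPerSecond * transferDeadlineSeconds;
            return afterAccept <= deliverable;
        }

        public synchronized void accepted() { transfersInFlight++; }
        public synchronized void completed() { transfersInFlight--; }

        public static void main(String[] args) {
            // 20 KB/s upstream, 32 KB blocks, 120 second deadline => 75 concurrent transfers.
            CapacityEstimator est = new CapacityEstimator(20 * 1024, 32 * 1024, 120);
            int accepted = 0;
            while (est.canAccept()) { est.accepted(); accepted++; }
            System.out.println("capacity = " + accepted + " transfers");
        }
    }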

One cost of this is relatively high latency on each hop. It can take many seconds to get a slot on the next node, because we have limited capacity, largish blocks, and many peers. Reducing the number of peers is unlikely to help much either, and reducing the size of the blocks will increase other costs considerably. The solution is to divide requests into either bulk mode or realtime mode. The latter will be used for fproxy, the former for long term downloads. Fproxy requests have higher transfer priority but lower capacity, and are expected to be very bursty, enabling us to have relatively low latency. Bulk requests should be constant, high throughput, high latency; realtime requests are bursty, low throughput, low latency. I'm not sure who first suggested this, maybe Thelema. Anyway, it appears that the new load management scheme won't work without it. An interesting consequence of this is that we can run more bulk requests at once, improving throughput and security, but we will need threadless block transfers to make this work. I have another branch with this but it needs updating and debugging.
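
To make the split concrete, here is a toy queue in Java (constants and names are illustrative, not fred's): realtime traffic always jumps ahead of bulk traffic when a transfer slot is handed out, but it only gets a small number of slots, while bulk gets many more and simply waits its turn.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of the bulk vs realtime split: realtime (fproxy) transfers are
    // always served first but capped low; bulk (queued download) transfers get
    // a much larger cap and tolerate latency.
    public class BulkRealtimeQueueSketch {
        static final int REALTIME_SLOTS = 8;
        static final int BULK_SLOTS = 64;

        private final Deque<String> realtime = new ArrayDeque<>();
        private final Deque<String> bulk = new ArrayDeque<>();
        private int realtimeInFlight, bulkInFlight;

        public synchronized void enqueue(String transfer, boolean isRealtime) {
            (isRealtime ? realtime : bulk).addLast(transfer);
        }

        /** Pick the next block transfer to start, or null if nothing may run yet. */
        public synchronized String next() {
            if (!realtime.isEmpty() && realtimeInFlight < REALTIME_SLOTS) {
                realtimeInFlight++;
                return realtime.pollFirst();   // low latency: always served first
            }
            if (!bulk.isEmpty() && bulkInFlight < BULK_SLOTS) {
                bulkInFlight++;
                return bulk.pollFirst();       // high throughput, tolerates latency
            }
            return null;
        }

        public static void main(String[] args) {
            BulkRealtimeQueueSketch q = new BulkRealtimeQueueSketch();
            q.enqueue("bulk: some.iso block 42", false);
            q.enqueue("realtime: fproxy page block", true);
            System.out.println(q.next()); // realtime block goes first despite arriving later
            System.out.println(q.next()); // then the bulk block
        }
    }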

There are also some security issues. We need to give our peers enough information to predict whether we will reject their requests based on bandwidth liability. Currently the information used internally is rather too high resolution to be given out to peers. IMHO reducing it to just the number of transfers in each direction won't impact performance much (there is plenty of vagueness elsewhere), but will make it safe to share this. Note that the current load management system has much bigger security issues: It is much more vulnerable to DoS (although fair sharing between peers fixes this), and the originator rate control (AIMD) thing is horrible.

In other news, the new windows installer seems to be working well, although it is not yet finished, and Freetalk continues to make promising progress. Hopefully both will be ready soon; you've heard that before, of course. Help with either would be appreciated!

Another interesting if scary development is some back-of-the-envelope calculations done recently on what it would cost to connect to all opennet nodes, in order to e.g. do correlation attacks. I'm not quoting numbers but let's say they are low enough, even on a large network, to make the long-term viability of opennet rather doubtful. They may even explain why we haven't seen any serious DoS attacks - the bad guys find it more effective to watch what everyone is doing than try to prevent them from doing it. The long term solution has always been darknet (assuming oskar solves the pitch black attack), and the chicken and egg problem has always been that because Freenet is small, slow and has a relatively high proportion of paedophiles etc, nobody can convince their friends to use it. Any input is welcomed; this and other things hopefully will be discussed at an IRC meeting sometime in the near future that nextgens has convinced me to organise. Another big issue is 0.8 timing. We have funding for some time now; it would be great to get 0.8 out, but on the other hand you can always add more features - and some of them are really important, e.g. load management changes, data persistence, filesharing searching, ...

I am on holiday in Wales from Saturday to Thursday, and only to be contacted in case of emergency. I'm not going to merge the load management code until I get back! Health wise, I'm fairly sure I know what the problem is, but I won't know for sure until some time after I get back. Mum and Dad have gone up already via Bristol because of a friend's PhD.

2010/08/19

Chaos good and bad

Much has happened since last time I posted. We are now up to build 1271. The metadata changes in 1255 needed many more builds for bugfixes, but should now be more or less stable; see the previous post for details on why they were worthwhile. There were many more optimisations and bugfixes in the period up to ~ 1266, including:

  • a good deal of code related to detecting data corruption
  • better handling of bad MIME types when filtering is enabled
  • an important bugfix for FEC: the bug had caused slowness in the past, but with cross-segment encoding it caused data corruption
  • various blob file leak fixes
  • more FEC bugs (the same segment getting decoded more than once etc causing problems)
  • site insert fixes
  • load limiting tweaks (particularly on slow nodes or just after we've had something disruptive happen e.g. a mandatory build)
  • new Library which now searches both the old and new wanna indexes
  • yet more client layer fixes (some directed at preventing fetches from stalling and failing them instead)
  • logger fixes
  • fix splitfile bloom filters (different to datastore bloom filters) and move them into the database (rather than plaintext temp files)
  • don't stripe FEC decodes in the normal case (reducing the number of seeks needed)
  • store the list of splitfile keys more efficiently (again reducing disk I/O and database size)
  • more and better buffering (reduce disk I/O)
  • bookmark fixes
  • even more client layer fixes, including what I hoped was a fix for the stalling at 100% bug

By this point it was clear that the main problem people were complaining about was downloads stalling, and that fixing this would be nontrivial; I chased up various leads and got nowhere. It also became clear that the problem was probably in the request queue / scheduling / cooldown logic. What happens here is that we have a tree of requests which need to be run (we have a separate structure for which requests want which key, which is what the splitfile bloom filters are for, due to the need to recognise when a key is offered by a node or is wanted by two separate downloads). This tree starts with priority, and then retry count, and then random between clients (e.g. the global downloads queue is only one client; Frost is another), and then random between high level requests (e.g. a queued download), and then the individual low level requests (e.g. a single splitfile segment), each of which might have more than one block to fetch and so may start more than one network-level request. There was also a separate structure, the "cooldown queue": Persistent downloads are generally queued with max retries = -1 i.e. we will keep on requesting failing blocks forever, but we do not want the network to be flooded with requests which are unlikely to succeed. So we impose a limit: A single key may only be requested 3 times every half hour. Hence after that, we unregister the request and put it on the cooldown queue. The problem with this was that even if we were not making any progress, the database (in which all this is stored) was constantly changing, resulting in random writes / disk seeks, fragmentation of the node.db4o[.crypt] file, slowdowns to the whole system, code complexity and so on. So the change made initially in 1267 was (there is a small code sketch of the new structure after this list):

  • Remove retry count from scheduling: The tree is now priority -> random between clients -> random between high level requests -> random between low level requests. This also means we will (when I get around to it) be able to change priorities really fast.
  • Keep the cooldown wakeup time for each request, and cache it all the way up the tree: Hence the database does not change at all (for requests with max retries = -1 i.e. forever), unless we are making progress. The memory cost is negligible when you consider that we already have the bloom filters for each segment in memory anyway, and the reduction in database access should significantly improve CPU usage as well as disk I/O (IMHO random disk writes are generally Freenet's main system level performance impact on most systems).

This should radically reduce the amount of disk I/O needed, simplify the code, circumvent any bugs in the code responsible for going into cooldown, coming out of cooldown etc, avoid fragmentation, reduce opportunities for database corruption, and generally improve performance, and hopefully in the medium term stability too.
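
To make the cooldown change concrete, here is a minimal sketch of caching wakeup times up a request tree. The class and field names are made up for illustration and are not fred's actual classes; the point is simply that keeping the per-subtree minimum wakeup time in memory lets the scheduler skip sleeping requests without touching the database at all.

    import java.util.ArrayList;
    import java.util.List;

    class RequestNode {
        private final RequestNode parent;
        private final List<RequestNode> children = new ArrayList<RequestNode>();
        // Earliest time (ms) at which anything in this subtree may run again;
        // 0 means "runnable now". Held purely in memory, so no database writes.
        private long cooldownWakeupTime = 0;

        RequestNode(RequestNode parent) {
            this.parent = parent;
            if (parent != null) parent.children.add(this);
        }

        // Called on a leaf when it hits the limit (e.g. 3 fetches per half hour).
        void enterCooldown(long wakeupTime) {
            cooldownWakeupTime = wakeupTime;
            if (parent != null) parent.recomputeWakeup();
        }

        // An inner node wakes up as soon as any child does, so cache the minimum.
        private void recomputeWakeup() {
            long earliest = Long.MAX_VALUE;
            for (RequestNode child : children)
                earliest = Math.min(earliest, child.cooldownWakeupTime);
            if (earliest != cooldownWakeupTime) {
                cooldownWakeupTime = earliest;
                if (parent != null) parent.recomputeWakeup();
            }
        }

        // The scheduler can skip a whole subtree that is still asleep.
        boolean isRunnable(long now) {
            return cooldownWakeupTime <= now;
        }
    }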

Of course, this did involve quite large changes, and as a result, there were many bugs, which needed to be fixed over the next few builds. One of the biggest issues was a change to when we inserted healing blocks. When we download a splitfile, each segment (chunk of 4MB) is decoded, and the original 128 data blocks plus 128 check blocks (for redundancy) are recovered. Those blocks that we were unable to fetch and had to decode are "healed", that is, reinserted. However, since the new code no longer has accurate information on how many times we tried to fetch each block, this was a bit tricky. Initially I made it just heal everything that had to be decoded, even if we had not tried to fetch it. This resulted in a lot more inserts, which caused the network to go a bit mad. Load management takes time to adjust to a big change such as a lot more inserts, and I released a new build (two new builds, doh) to fix the underlying problem; I am fairly sure that the major problem at the moment is still backoff caused by load management still being in the process of readjusting. Sorry folks...
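
For illustration, the healing decision per segment boils down to something like the following sketch; queueHealingInsert() and the flags are hypothetical, not fred's real client-layer API.

    // Illustrative sketch of the per-segment healing decision (not fred's code).
    public class HealingSketch {
        static void healSegment(byte[][] blocks, boolean[] fetched, boolean[] attempted,
                                boolean healEverythingDecoded) {
            for (int i = 0; i < blocks.length; i++) {
                if (fetched[i]) continue;               // already on the network
                // The initial 1267 rule (healEverythingDecoded = true) reinserted
                // every decoded block, even ones never attempted, which flooded the
                // network with inserts; the gentler rule heals only blocks we tried
                // to fetch and lost.
                if (healEverythingDecoded || attempted[i]) queueHealingInsert(blocks[i]);
            }
        }

        static void queueHealingInsert(byte[] block) {
            // hand the block off to the insert queue (placeholder)
        }
    }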

If it doesn't settle in a week or so there may be a deeper problem, but I doubt it. It does seem to be impacting overall performance - the success rates by HTL have dropped somewhat, for example. I would appreciate any input people can give, anyway. I am chatting on FMS and Freetalk lately, and also have read-only access to the freenet board on Frost.

It is of course tempting to rewrite the whole load management system. Most of the changes we've made recently are things that have been "cooking" for quite a long time (years in some cases) but we've only put all the pieces together and been confident about what needs to happen relatively recently. I think we are at that point with load management.

Our current load management scheme has three components: First, the node estimates its capacity for requests, and rejects those it can't deal with. Second, if a request is rejected or if it times out or fails in transfer, the node routing to it "backs off" that node - it goes into the backed off state for a short period, and if it keeps on causing problems that period increases exponentially up to a limit; while a node is backed off we only route to it as a last resort. Third, each node maintains a set of additive-increase-multiplicative-decrease numbers, which function as a window estimating the number of requests that can run on the network at any given time. We use these to control the rate at which we originate new, local requests. This is intended to be similar to what happens on TCP/IP over Ethernet, but IMHO it's not a good match: We can make it react more quickly or more slowly, but it is never going to be able to deal with local network conditions (e.g. small numbers of fast darknet peers) efficiently, and makes tuning fairness between request types and peers tricky; but above all, it can cause a great deal of misrouting. This affects data persistence: A leading theory as to why data persistence is so poor is that inserts are rejected by the nodes that would have been the ideal locations. If you insert a block 3 times it greatly increases its chances of success - even though it should in theory go down exactly the same set of nodes! Our current load management system, in the worst case, degenerates to routing load to the least overloaded node, just as 0.5's did; this is however a temporary state while the AIMD's adapt, but it is built on a significant amount of misrouting and randomly rejecting perfectly good requests. Also, it is far from ideal in terms of security, as the originator runs the AIMD algorithm to decide how fast it can start its own requests.
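
For readers unfamiliar with AIMD, the third component behaves roughly like TCP congestion control: grow the window a little on every success, cut it sharply on a reject or timeout. A minimal sketch, with illustrative constants rather than fred's actual values:

    // Minimal AIMD window sketch, in the spirit of TCP congestion control.
    // The increment, decrement factor and floor are illustrative values only.
    class AIMDWindow {
        private double window = 1.0;          // estimated requests we may have in flight

        synchronized void onSuccess() {
            window += 1.0 / window;           // additive increase (slower as window grows)
        }

        synchronized void onRejectedOrTimedOut() {
            window = Math.max(1.0, window * 0.5);   // multiplicative decrease
        }

        synchronized boolean canStartRequest(int inFlight) {
            return inFlight < (int) Math.floor(window);
        }
    }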

Also, it is unnecessary. Discussions years ago on Frost and, more recently, the CCN paper (which is well worth a read) have shown that this sort of load management by guessing rates is unnecessary on a request/response system; what we need to do is send requests as long as we have available capacity to do so. A new load management scheme is proposed on this basis (a rough sketch follows the list):

  • Each node computes its own capacity, divides it up among its peers, allows them a certain slice guaranteed and beyond that may reject requests (without preventing them from being retried)
  • The node tells its peers what their slice is and what the overall limit is
  • We don't generally back off on a rejection unless there is a serious problem, so backed off peers are even more of a last resort - peers backed off beyond some threshold time can safely be assumed to be temporarily broken, and there should never be lots of backed off peers unless we ourselves are broken
  • We intentionally and explicitly limit misrouting: Severely backed off peers or peers with very low capacity don't count, but generally we should only consider those peers that are reasonably close to where we want to route the request to - maybe the top half or quarter of the routing table.
  • If we cannot immediately route to a good enough node, we wait, using a strict FIFO policy. This means that the request is not freed up, so the downstream nodes will not make more requests. This is much preferable to the alternatives - either unrestrained misrouting and all the problems that come with it, or killing requests and the originator having to guesstimate how many requests can safely run in parallel to limit the failures.
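
A rough sketch of how these pieces could fit together, with made-up numbers and class names: capacity is divided into guaranteed per-peer slices, requests beyond a peer's slice are only accepted if there is spare capacity, and locally originated requests that cannot go to a good enough peer wait in a strict FIFO queue instead of being misrouted or killed.

    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;

    class CapacitySliceScheduler {
        private final int totalCapacity;                     // requests we can run at once
        private final int peerCount;
        private final Map<String, Integer> inFlight = new HashMap<String, Integer>();
        private int totalInFlight = 0;
        private final Queue<Runnable> waiting = new ArrayDeque<Runnable>(); // strict FIFO

        CapacitySliceScheduler(int totalCapacity, int peerCount) {
            this.totalCapacity = totalCapacity;
            this.peerCount = peerCount;
        }

        // Each peer is told its guaranteed slice and the overall limit.
        int guaranteedSlice() {
            return totalCapacity / peerCount;
        }

        // Accept a remote request if the peer is within its slice, or if there is
        // spare overall capacity; otherwise reject (the peer may retry later).
        // A real scheduler would account for reserved-but-unused slices more carefully.
        synchronized boolean accept(String peer) {
            int used = inFlight.containsKey(peer) ? inFlight.get(peer) : 0;
            if (used < guaranteedSlice() || totalInFlight < totalCapacity) {
                inFlight.put(peer, used + 1);
                totalInFlight++;
                return true;
            }
            return false;
        }

        // Locally originated requests that cannot immediately go to a good enough
        // peer wait here, first in first out, rather than being misrouted or killed.
        // The queue is drained when capacity frees up (not shown).
        synchronized void routeOrQueue(Runnable sendToGoodPeer, boolean goodPeerAvailable) {
            if (goodPeerAvailable) sendToGoodPeer.run();
            else waiting.add(sendToGoodPeer);
        }
    }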

Anyway, this is probably something for 0.9. Hopefully the network will settle, and we can improve data persistence significantly by simple tweaks such as making nodes more likely to accept inserts. In the meantime, assuming we stick to the original plan, there are relatively few features left before 0.8.0:

  • Sort out disk encryption: Currently we encrypt the client database (node.db4o) if the physical security level is set to NORMAL, HIGH or MAXIMUM. We should make it much clearer that you should install Truecrypt and encrypt your full system (especially swap), but at the same time, it is not acceptable that casual, lazy or computer illiterate users have no protection, especially as if they are only browsing HTML or forums we actually protect them reasonably well apart from the swap issue. Anyway, the problem is node.db4o is encrypted using CTR mode, which isn't good with journaling and block remapping, but fails spectacularly when we implement the next item; I plan to port the GF128 code from the kernel and use XEX/XTS.
  • Related physical security stuff: There are various minor things - encrypting plugin databases (easy), keeping the bookmarks, the list of recently completed downloads, various plugin data in the database (all easy but there are several items). None of this is needed if we pull all the crypto code out but I don't think that makes sense.
  • Auto-backups and related stuff: Sooner or later node.db4o gets corrupted. This is a fact of life, especially on flaky commodity hardware with unclean shutdown etc etc. A solution to this is to regularly make copies of the node.db4o; db4o provides a mechanism for this, and although it isn't quite "online", it's close enough (see the backup sketch after this list). A serious difficulty is that the node.db4o backup may refer to temporary files or temporary buckets in the persistent-blob file, so we need to ensure that we don't delete/reuse them until after the next backup. However, once we have this, aggressively caching node.db4o to limit the number of disk writes, including possibly not writing it at all for some time and then writing it all at once every few minutes (provided it is small relative to available memory), becomes both very attractive and easy to implement.
  • Freetalk: Not something I can give a whole lot of help on, although there are some things such as encrypting the database that I need to deal with. In any case, p0s is getting very close to releasing it as the first quasi-official version, available from the plugins list.
  • New Windows installer: Zero3 has a new alpha of the wininstaller, which hopefully will fix the service errors. This version installs for the individual user and starts on login via a normal startup link, rather than installing as a service. It appears that the permissions problems that cause the "service not started" errors are related to running as a service. Anyway, when we have had a few more successful test reports, and Zero3 gets around to writing an update script, this will become the new official wininstaller.
  • Localisation in plugins: This is important, and fairly easy.
  • GSoC: sajack's work on filters: Spencer (sajack) has written us a range of infrastructure improvements for filtering, some of which was merged a while back; more will be merged in 1272, possibly including support for filtering inserts (e.g. stripping out EXIF in JPEGs, detecting charsets on insert). He has also written filters for Ogg Vorbis, Ogg Theora, and MP3, and the beginnings of embedded HTML5 video/audio tags support (dependent on web-pushing). Hopefully we will merge the infrastructure and the new filters; web-pushing won't be on by default for some time unless somebody decides to fix it, it's not a high priority as far as I'm concerned... Another thing we could do on filters is merge the ATOM and SVG filters from last year.
  • GSoC: zidel's new packet format: This should improve efficiency (especially payload %) significantly, allow Freenet to work on lower MTUs, and pave the way for transport plugins. It may or may not go in before 0.8.0. The code is generally of good quality and has many unit tests at various levels.
  • Data persistence work: I still haven't written the new block insert tester, which compares data persistence with various insert tweaks recently coded. This will hopefully confirm our theories as to why a block inserted once is 75% retrievable after a week but the same block inserted 3 times in rapid succession is 95% retrievable, even though it is apparently routed the same each time. Which hopefully will result in dramatic improvements to data persistence.
  • Find a new host for the uninstall survey: It was on Google Docs but has been hopelessly broken for a year. Maybe we can fix it, or find some other host...
  • Consider hierarchical USKs: We have the insert-time support; maybe it's worth thinking about the request-time support.
  • Think seriously about Library: We currently search both the old and new wanna. The old one takes quite a while because it has to transfer a lot of data; the new one is faster but fails relatively often, and is surprisingly slow. We will probably also search gogo, but it may be necessary to make some format improvements (e.g. putting the metadata for each layer into the layer above).
  • Finish the Mac installer changes, and possibly Linux system tray applet: We still get reports of Freenet not working after restart on Macs, and the system tray applet and memory autodetection don't work. If possible it would be really good to fix this before release, as e.g. a lot of journalists use Macs!
  • Memory autodetection tweaks: This is trivial but needs to be done.
  • Auto-updater key change support: Murphy's Law dictates that sooner or later we will need to change the update keys. It would be best to have a smooth upgrade path sorted out in advance. This is likely a couple of days' work.
  • Basic RSKs: Still worth considering, would allow an official project freesite and useful for hostile environments.
  • Native FEC: We need to decide what to do about this. It might be best to leave it out given the problems we've had, but if somebody trustworthy and JNI-aware reviews the code carefully, maybe we should restore it for better performance especially on slow CPUs.
  • Easy full reinserts: This is probably a good idea, although it is no safer than inserting as a CHK...
  • Various work on usability and UI: For example, we need the queue page to have checkboxes for easy bulk removal. We probably want to move the bookmarks to a menu, and use the minimalist theme. Etc.
  • Lots of bugfixes!
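
On the auto-backup item above: a hedged sketch of the db4o call involved, using db4o's documented ExtObjectContainer.backup(); the exact package names and exception behaviour should be checked against the db4o version fred actually ships, and the bookkeeping for temp buckets is not shown.

    import com.db4o.Db4o;
    import com.db4o.ObjectContainer;

    public class NodeDbBackup {
        public static void main(String[] args) {
            ObjectContainer db = Db4o.openFile("node.db4o");
            try {
                // db4o copies the open database to the given path; as noted above
                // it is not quite an online backup, but close enough. Any temp files
                // or persistent-blob buckets referenced by this copy must not be
                // deleted or reused until the next backup has completed.
                db.ext().backup("node.db4o.backup");
            } catch (Exception e) {
                // log/handle a failed backup; the old backup stays untouched
            } finally {
                db.close();
            }
        }
    }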

Unfortunately, our third Summer of Code student, working on distributed searching and file indexes, dropped out. That would have been very interesting. The top item on uservoice, now by some distance, is "Write a killer file-sharing application"... IMHO a reliable built in forums system and significant improvements to data persistence would go a long way towards this, but easy to use spam resistant distributed file searching and index maintenance tools (similar to Thaw but done right) probably won't be part of 0.8.0.

In 0.9 I hope we will include major changes to load management, full selective reinsertion, basic time-based insert protection, distributed filesharing/file searching, web-pushing, hopefully a solution to the Pitch Black attack, and hopefully much more. More work on data persistence, and possibly bloom filter sharing, are also very important. Bloom filter sharing will make a significant gain, although it may only be worthwhile on darknet or peers we have a very long term relationship with; other network level tweaks may improve performance and persistence more, we need to do a lot more analysis of the network. We had planned to do Bloom filter sharing with Google's last payment, but what we've actually done is IMHO better. Passive requests and secure bursting are probably 0.10 stuff; both fix critical design flaws, so ideally they would be moved up a bit... We'd also like basic PDF and VP8 filters, and transport plugins; these might be Summer of Code projects. Sorting out multi-container freesites is also essential some time before 0.10 or so.

What else is going on? Well, Google decided, without being prompted, to give us another $18K (the third such donation so far!). We were down to our last $100, although I didn't have an exact figure so the website showed $600 - but only for a few days! I have no idea what their agenda is, especially given some of their high ups' recent comments about anonymity, but this means we can continue, so it's clearly a good thing! Certainly they're as close to "the good guys" as multinational corporations get, although of course I am skeptical about such single points of control/failure/censorship - he who can compel Google to block a keyword effectively erases it from the Internet...

Evan quoted my long term roadmap, most of which I stand by; depending on demand, we may move features up or down (e.g. partial reinsert support). Then he disappeared, and I had to take over his GSoC student, and apologise to Google for missing the mid-term evaluation deadline. He's publishing his stats again now though, although I haven't actually talked to him recently; maybe there was some emergency.

So that's it for Freenet. What else is happening? Well, I've seen a consultant, who was even more thorough than the senior GP when I eventually got to see him (having repeatedly seen locums and not got very far). I have a bunch of tests scheduled, and hopefully we can confirm what the problem is that has caused me to lose far too much weight. I finally got around to upgrading my computer (having bought the parts ages ago), setting up wifi, and getting mum's downstairs Linux system working again (so dad can get some work done and not have to put up with mum's constant chattering with friends on Skype!), and am looking forward to a holiday in Wales some time in September.

I'm still theoretically looking for more contract work to diversify my business and earn more than I get from Freenet, but this is largely on hold until I am feeling a good deal better. If and when I stop working for Freenet, I may go to university; working for Google or similar is at least somewhat interesting, but I don't think I'd want to work in London, I don't think I could handle the pollution levels long-term; all my family have asthma and we can feel the difference when going out of a major city; it was one of the biggest reasons for us moving from Bristol up to Scarborough.

In recent ponderings about Ripple, IMHO when people start finding things to buy through online social networks, routable friend-to-friend currency will become a whole lot easier to bootstrap, and maybe inevitable. Technically speaking the problems are surmountable; the real difficulty is how to bootstrap it, but IMHO that may not be a big problem for much longer... My view is that full p2p is unlikely at least in the short term because of difficulties with finding good routes and liability issues on downtime, but a few thousand servers (run by different people) that exchange a lot of data with each other could likely find perfect routes very efficiently. Privacy wouldn't be perfect but it never is with money; Ripple isn't about privacy, it's about decentralising value creation and making money map more closely to what people (as opposed to corporations, governments and economists) want and trust. Of course it's a ludicrous experiment, and it may fall flat on its head - but it'd be fascinating to try. Not sure what I think about the circular barter proposal on the ripple list, it'd make sense to try it out in a computer game perhaps...

Meanwhile the world continues to march enthusiastically into the abyss. The latest situation:

  1. The Americans won't do anything until after the Senate elections.
  2. It would take a miracle for the Democrats to gain enough in the Senate elections to do anything in the New Year. Meaning we're talking about after the presidential election, some time in 2013. Which is cutting it awfully close.
  3. Europe is unlikely to move to 30%, but we'll see. At least the UK, France and Germany seem to support it (despite worries earlier on), but Eastern Europe can be relied upon to oppose it. This has probably been decided against by now; I haven't been keeping up...
  4. The UN is seriously talking about what happens when Kyoto expires and there hasn't been a meaningful agreement signed. One is unlikely to come out of Cancun, and certainly not without the Americans.
  5. The floods in Pakistan, fires/smog in Russia (both of which had record temperatures in the last couple of months), and landslides in China (but not the flooding in Eastern Europe??) seem to be the result of a major change in the jet stream (not the Gulf Stream; the jet stream is about high altitude wind currents, the Gulf Stream is ocean currents).
  6. Stratospheric geoengineering won't save us, because its effects vary so much from country to country that getting agreement on it, even in a 3+ degree world, will be impossible. So it's either wars and world government or wait many decades before we can do fine grained climate/weather control.
  7. And last and most definitely least, the UK government's claims to be the greenest government ever (along with its claims to what Bush called compassionate conservatism) are looking increasingly shaky.

We could do the right thing. Or we could wait for some magical new technology that is cleaner and cheaper than fossil fuels and takes no time, no investment and no subsidy to be deployed everywhere. Or maybe the next phase of the oil shock will save us. We'll see. I believe in divine sovereignty - but some terrible things have happened, and the good that came out of them only became clear a long time after.

2010/06/23

purple amphibian

Build 1255 etc

A lot of work has gone into build 1255. Most of that work has been related to the metadata layer: When we changed it slightly in 1251, a lot of users got upset about reinserts no longer working. 1255 will provide the ability to reinsert files exactly as they were originally inserted. It also implements a great number of metadata changes all at once, in order to minimise the disruption resulting from CHKs changing in multiple builds.

One big change is that anything inserted under an SSK will use randomised splitfile encryption. This means that it will not be possible to guess which CHKs will be used, even if you know the content that is going to be inserted. The upload form (which is now on a separate page) explains this and allows you to insert as a CHK or an SSK (with it generating a random SSK automatically for you), or specify your own key, or will soon have a reinsert option. This should greatly improve security on opennet, particularly against the mobile attacker source tracing attack that we have discussed before. Random splitfile encryption was actually agreed at the freenet mini-summit in 2007...

The other big change, which is arguably even more important, is cross-segment splitfile redundancy: For any file of at least 80MB, instead of just chopping it up into independent segments of 128 data and 128 check blocks (the size varies a bit according to even segment splitting), we add a second layer of redundancy which operates between segments in a similar way to that used on data/audio CDs. This greatly improves the reliability of very big files, assuming the same probability of a single block being found (or lost). The simulated figures are really quite remarkable, but we'll see what difference it makes in practice...
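
Some back-of-the-envelope arithmetic on why this matters, assuming (unrealistically) that blocks fail independently and using made-up block success rates: each 128-of-256 segment can look healthy on its own, yet without cross-segment redundancy a file of many segments needs every single segment, so the probabilities multiply.

    public class SegmentOdds {
        // P(at least `needed` of `total` independent blocks are retrievable),
        // each retrievable with probability p, computed as a running binomial.
        static double segmentSuccess(double p, int needed, int total) {
            double[] dist = new double[total + 1];
            dist[0] = 1.0;
            for (int b = 0; b < total; b++) {
                double[] next = new double[total + 1];
                for (int k = 0; k <= b; k++) {
                    next[k] += dist[k] * (1 - p);
                    next[k + 1] += dist[k] * p;
                }
                dist = next;
            }
            double ok = 0;
            for (int k = needed; k <= total; k++) ok += dist[k];
            return ok;
        }

        public static void main(String[] args) {
            // Illustrative block success rates only; 40 segments is roughly a 160MB file.
            for (double p : new double[]{0.55, 0.60}) {
                double seg = segmentSuccess(p, 128, 256);
                System.out.printf("block %.2f -> segment %.4f -> 40-segment file %.4f%n",
                        p, seg, Math.pow(seg, 40));
            }
        }
    }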

Other big changes include:

  • Using a single encryption key for a splitfile (halving the size of the metadata)
  • Including the MIME type (this will help with the bugs related to the new filter code), the final file size, and the number of blocks that need to be fetched (so no more jumping progress bars at least for files), in the top block.
  • Including a variety of hashes of the final data in the top block, which are verified before completing the download. For larger files this includes ED2K, TTH, MD5, SHA1 and SHA512 hashes as well as SHA256. We report the hashes via FCP very early on with an ExpectedHashes message.
  • A much better implementation of even segment splitting. All segments are now the same size to within 1 block. In 1251 we could still have the last segment being dramatically smaller than the rest.
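
The arithmetic of even segment splitting is simple; a sketch (data blocks only, check blocks are handled similarly and are not shown):

    // Illustrative arithmetic for even segment splitting (not fred's actual code):
    // split n data blocks into ceil(n/128) segments whose sizes differ by at most one.
    public class EvenSplit {
        public static int[] segmentSizes(int dataBlocks, int maxPerSegment) {
            int segments = (dataBlocks + maxPerSegment - 1) / maxPerSegment;
            int base = dataBlocks / segments;
            int extra = dataBlocks % segments;       // this many segments get one more block
            int[] sizes = new int[segments];
            for (int i = 0; i < segments; i++)
                sizes[i] = base + (i < extra ? 1 : 0);
            return sizes;
        }

        public static void main(String[] args) {
            // 6401 blocks: the old scheme gave 50 segments of 128 plus one of 1 block;
            // even splitting gives 51 segments of 125 or 126 blocks.
            for (int size : segmentSizes(6401, 128)) System.out.print(size + " ");
        }
    }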

Various other changes include making it clear that the only way to get HIGH security level is to use darknet (usability input solicited), a new testing mode for inserts that only inserts to the datastore, various bugfixes, some nice optimisations to threads and temp files, etc.

Evan and FreenetUser have recently brought up the topic of reinserts. Reinserts are dangerous but seem to be a necessary evil at this point. On one level, 1255 significantly improves support for reinserts. You can download a file, it will detect the correct reinsert settings, and then you can specify them on your reinsert. On the other hand, we don't yet have a reinsert option on the upload a file page (we will eventually, it is pretty easy), and the option of inserting to a random SSK with all the keys randomised makes reinserts a little harder. But the real goal is to reduce the need for reinserts: Both better segment splitting and especially cross-segment redundancy will help significantly with this for larger splitfiles. These are both client layer changes: The chances of a single block succeeding are not changed, but better redundancy lets us dramatically improve success rates for (some) whole files.

But there seems to be considerable scope to improve the block level success rate too: We now have 3 flags on inserts which affect how hard we try to send the data to the right nodes. Previous test data suggests we may be able to dramatically improve block success rates, but we have to try this out to see what happens... That would be the big prize though, dramatically improved data persistence would make filesharing on Freenet much more feasible, and may make it a realistic option for relatively unpopular content which is often not available from bittorrent due to no seeds, which is an area Freenet should be good at IMHO.

So the first task after 1255 ships is to write a tester which inserts single blocks, tracks their retrievability after a week, and tries various combinations of flags. The data suggests we should be able to move from a mid-70s% success rate to a mid-90s% success rate, which would have a huge impact. And we can do this without having any practical impact on resource usage, just by making it more likely that inserts will be accepted on the nodes where they should be accepted. We might need to change the actual implementation once we know what the underlying mechanisms are. Arguably this links in very nicely with rewriting the load management (based on the two principles of severely limiting misrouting and only limiting based on the local capacity rather than trying to estimate the capacity of the whole network), but that is likely to be post-0.8; most likely we can come up with an acceptable set of hacks, maybe the ones that the flags implement, maybe not. I need to point out at this point that it was Evan who suggested, and did much of the theoretical work behind, even segment splitting, cross segment redundancy and the new insert flags...
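
A skeleton of what such a tester might record; the NodeClient interface here is entirely hypothetical, standing in for whatever part of fred's client layer (or FCP) the real tester would use.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class BlockPersistenceTester {
        // Entirely hypothetical interface for inserting and fetching single 32KB blocks.
        interface NodeClient {
            String insertBlock(byte[] data, int insertFlags); // returns the CHK
            boolean fetchBlock(String chk);                    // true if retrievable
        }

        // Phase 1: insert `samples` random blocks with one combination of the
        // insert flags, and record the keys (to be persisted somewhere).
        static List<String> insertSamples(NodeClient node, int flags, int samples) {
            Random random = new Random();
            List<String> keys = new ArrayList<String>();
            for (int i = 0; i < samples; i++) {
                byte[] block = new byte[32 * 1024];
                random.nextBytes(block);
                keys.add(node.insertBlock(block, flags));
            }
            return keys;
        }

        // Phase 2, a week later: what fraction of the keys is still retrievable?
        static double retrievability(NodeClient node, List<String> keys) {
            int found = 0;
            for (String chk : keys)
                if (node.fetchBlock(chk)) found++;
            return found / (double) keys.size();
        }
    }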

Capacity based load management opens up some fascinating long term options however. Have a look at my post "[freenet-dev] Fast *and* secure Freenet: Secure bursting and long term requests was Re: Planned changes to keys and UI", for a proposal that is half way to long-term/passive requests, that works best on darknet but quantifies what can be done on opennet, and that in ideal circumstances (darknet), not only beats the mobile-attacker source tracing attack but does so while transferring data very quickly, and providing for offline data transfer - all the requests go out at once (so an attacker cannot use the stream to gradually approach the originator), and then the data returns as quickly or as slowly as possible, possibly taking alternate routes when nodes go offline and possibly waiting for nodes (even the originator) to come back online. This is not true passive requests because it only persists after the data has been found, but it goes a long way in that direction and would need the same sort of load management.

In the nearer future, the two things that we absolutely must have for 0.8.0 continue to depend on specific individual volunteers: Freetalk, which p0s seems to be making strong progress on, and a working Windows installer, which is much less certain, but also hopefully easier. Other important (but not essential) stuff from the roadmap:

  • Automatic backups for node.db4o (fairly easy, main difficulty is we can't deallocate temp files until a backup)
  • Checkboxes on the queue page (easy)
  • Use the minimalist theme, or something close to it, probably without activelinks on the homepage (fairly easy)
  • Eliminate the remaining plaintext files, both in plugins and the node itself (fairly easy, lots of little bits to deal with)
  • Do a lot less disk I/O when a lot of downloads are queued (fairly hard, may not happen)
  • Then it's all bugfixes, minor tweaks to usability and minor optimisations...

Another important point is we need a new format index to include as the default index. Please let me know if you run a Spider (you will also need to run Library).

A very belated answer to FreenetUser's post about OneToOne and private messaging. Freemail actually uses most of the tricks he talks about - in particular it establishes a private channel between two identities so that traffic analysis is impossible after the first contact. However, Freemail is buggy and more or less unmaintained. There have been some patches recently though, hopefully we can have a new maintainer... With regards to the content filter, we can whitelist plugins, we already do this with Library so you can have a search box on your flog (FlogHelper does this). Bombe's fast templating engine might be useful if you're still looking for one; it's faster than HTMLNode's so will probably end up used by Freenet eventually. You will need to deal with trust and introduction in WoT, otherwise you will get spam; there isn't much point using WoT if you don't plan to reuse identities between different applications and manage trust lists IMHO. You can reuse the existing interfaces though. But also it would be awesome to be able to click on a poster in Freetalk, go to his identity's page, click on a button and send him a private message; integration is good. If on the other hand you don't like WoT, you could maybe use hashcash per introduction or something... With regards to logging, as I'm sure you've discovered by now, you can use the log level details setting to show exactly what you want and not what you don't want.

On the subject of removing EXIF, the JPEG filter in the Freenet source can do this, and we (sajack) will soon introduce the option to filter files on insert to strip out incriminating data.

On databases, mentioned by FreenetUser but mainly by In The Raw's Gestalt, I would just like to point out that for any on-freenet database that needs to scale, a solution can surely be constructed based on infinity0's COW btrees structure that backs new format indexes. Have a look at the wiki page on the subject. IMHO we can provide forms of atomicity, consistency and durability; isolation is closely related to multi-user support, and it's likely that multi-user support will consist in merging changes from one fork of a database to another, so maybe a little different to traditional databases - even traditional distributed databases.

Then there's Project Evergreen, a distributed social network based on Freenet that currently has a GUI that doesn't filter content. IMHO something like this is a good idea, I'm just not sure exactly what; traditional social networking functionality is hardly compatible with hardcore privacy, for example, but on the other hand it is essential that users be able to produce content really really easily, and we really have two separate social networks in Freenet - the darknet, and the network of anonymous identities that is the Web of Trust.

Uservoice makes interesting reading as always:

  • 438 votes: Write a killer file-sharing application
  • 338 votes: One GUI for all
  • 200 votes: Use the port 80, 443, 53, 1863 for communication
  • 200 votes: Add a 'pause' feature
  • 198 votes: Implement reinsert on demand

What can I say about this? Improving persistence should help reduce the need for insert on demand and greatly increase the amount of data that can be found and downloaded with no hassle. That doesn't solve the problem of how to find it though. Forums work, but they work a lot better if you can fetch files that people posted a while ago. Once Freetalk is bundled, we can have a message be posted automatically as soon as an upload finishes, making it somewhat easier to post content. Pages with links to files ought to work but have never been very popular on Freenet, probably because of poor data persistence. Hopefully nikotyan's work on distributed indexing will provide a big step towards an easy to use, spam resistant file search system, although due mostly to exams etc we haven't seen much yet.

All the folk who voted for "One GUI for all" will presumably be a good deal happier once Freetalk is official and bundled. The third most popular item is essentially transport plugins; zidel's work will make this a lot easier to implement (as well as greatly improving the efficiency of the transport layer), but transport plugins won't happen any time soon. I'm surprised how popular the pause feature is still, given that reconnecting to opennet is very fast nowadays; maybe it's worth looking into anyway, but I'm not convinced.

And for non-Freenet stuff...

For the record, Nestle gave in; kitkats no longer cause deforestation. (I inserted and referred to a short video about this some time ago). Of course this doesn't help those of us with gluten intolerance...

Hardware wise, I have narrowed down the problems to the Velociraptor and RMA'ed it; the replacement should come tomorrow. Hopefully this will "just work", rather than stalling every few hours as the old drive did; it's a nice fast disk, although of course all these SSD's are tempting. But rumour is Intel will introduce 600GB models this year, in which case it's a really bad time to buy; and anyway, I don't have the money. Then there is the question of a CPU upgrade; I've acquired an X6 very cheaply, but I need a motherboard for it, and possibly to switch RAM types... Mostly CPU isn't an issue, but on the other hand I had zgrep running all night on logs recently and it still hadn't finished, apparently being CPU-bound...

In terms of wetware, struggling on; food, fibre and gastrointestinal tract issues are far from dealt with; I may have to go back to talk about chest issues, and spirometry still hasn't happened; and mucus is a big problem, which seems only to be partially seasonal, and seems related to both other issues, but we have some ideas on how to manage it (if it's not purely due to pollen then dust is probably relevant). One upside is I'm able to get a decent amount of work done lately, especially last week, although it was rather tiring.

Reading wise, I'm not reading as much as I was (which is a good thing, it means I'm feeling a bit better), and am still chewing through various short stories by John Wyndham, and a few technical magazines...

News wise, people are starting to realise that 25% budget cuts are going to be really hard - and that the tax/benefits measures in the budget were pretty much as regressive as you'd expect from a Conservative government. It has been said that modern government is a theocracy of the rather vague science of economics... Let's hope they're right, because if they're wrong and Obama (and the Keynesians he listens to) is right, Europe could go the way of Japan... Also it's interesting that the recovery will supposedly depend on strong demand, while demand depends on reckless spending, reckless credit and not increasing VAT (doh!). Of course, I'm not really sure what to hope for here: Long term, consumerism will kill us all... Increasingly I subscribe to the view that this is just one of a series of shocks that are going to turn the world economy upside down. One of the big ones will be the next stage of the oil shock around 2018, which probably won't be cushioned at all by the current pathetically weak efforts to kick the oil habit before it kills us (especially if the rumours are true and the Tories scrap the battery subsidy), but there are plenty of opportunities to blow it in the meantime...

2010/06/14

Sorry folks again (pic from Newsbyte)

More chaos

Builds 1251, 1252 and 1253 are all because of the same bug, which technically was discovered on Saturday; but because I did not immediately recognise the implications, fixes were not fully deployed until today. Plus I was out shopping in the morning. Thanks to everyone who has reported bugs and nagged me about this - particularly to the original reporter on Saturday, who I got into a verbal fight with after he didn't want to leave his contact address for future testing. Sorry about that, you did us a great favour, and if I'd been more graceful I might have recognised what was going on more quickly.

The bad news is that the bug is potentially rather serious, and also goes way back: Odd-shaped splitfiles could cause the native FEC code to segfault, or produce glibc memory errors. For all I know this is exploitable for remote code execution, although there is no evidence anyone has found out how to do this yet. It's a weird bug: Using more than 100% redundancy is known to work with the onion FEC codec if you use the C API, so it must be something to do with the JNI wrapper (all this has been discussed on the chat today so is public knowledge regardless, as well as being mentioned in the commit logs).

There is however evidence that somebody was watching the #freenet chat logs, or the commit logs, with a fine toothcomb on Saturday; he has inserted a splitfile which triggers this bug in pre-1251 nodes as a Frost message. The reason we picked it up on Saturday in the first place was that 1) 1251 inserts splitfiles with >100% redundancy (it adds an extra check block to each segment of less than 128 blocks i.e. to most segments given even splitting), and 2) aforementioned person on #freenet bugged me about it when his system kept segfaulting consistently on fetching a specific key. Unfortunately I didn't realise then that splitfiles deliberately inserted with >100% redundancy would still cause older nodes to crash, and worse that segfaults can usually be turned into remote code execution vulnerabilities with a certain amount of work. I guess I'm still not 100%, although I'm certainly getting more done lately. Anyhow, by Monday, the attacker had inserted this key, which was consistently causing segfaults for some users (and another user reported a glibc memory error). Fortunately one of aforesaid users chased us up on #freenet. One lesson from this is if something is important nag us until we fix it! As regards potential compromises, only nodes which use the native FEC code are (were) vulnerable, so only 32-bit x86 linux and windows systems (mine is AMD64). It is possible that the few nodes testing freenet-ext.jar version 27 might have been vulnerable on other platforms, but this is uncertain, the newer code might fix the bug entirely. Ximin (infinity0) will have a look at this later this week hopefully.

Of course, there is a significant amount of other code in 1251/2/3, and I may as well explain it here. Firstly, splitfiles are now split up into segments evenly: rather than all but the last segment being exactly 128 data blocks and 128 check blocks (so the last segment sometimes has 1 data block and 1 check block, meaning it will get lost way too quickly), the segments are all almost the same size (the last one is only a few blocks smaller). And various other tweaks. This should help quite a bit with data retrievability for those files where the last segment is significantly smaller than the rest. And it's what ultimately kicked off the FEC errors...

The other big change is the content filter. A big chunk of sajack's work has been merged, filtering is now in the client layer rather than fproxy, it operates on streams rather than buckets (which will be used to significantly reduce the amount of disk I/O needed soon), and so on. And you can filter downloads to disk from fproxy, and downloads via FCP. Spencer (sajack)'s future work will include further optimisations and filters for more formats (e.g. mp3).

One other thing going on: A new version of Library. Previous versions were inserting new-format indexes in whatever the local character set was, and parsing them in it too. This was resulting in the indexes being unusable. We now force UTF-8. If you have been running the Spider you may have to dump its progress (shut down and delete library*, some of which are directories). A workaround was apparently found on Frost; those using this workaround should be able to continue with the new code.

I had begun to start thinking about cross-segment FEC, but it turns out that we need binding content hashes first. So I have a branch for content hashes in metadata. We will take the SHA-256 hash of the original, uncompressed content, and put it in the metadata. This will then be verified when the download completes, providing an extra level of integrity checking in case of bugs (useful for e.g. auto-updates, although they won't use this for some time). It's also useful for cross-network stuff - a file with the same hash is the same file, whichever network it is on. We will include other hashes on larger files and/or on demand, since other networks often use different hashes (e.g. sha1, sha512, md5). Also, we can use the hash to generate a single key for all the blocks in a splitfile, and thus we only need to store the routing key, not the crypto key, saving 32 bytes per block and halving the size of splitfile metadata (at the cost of some coalescing). And once we're doing that, we can offer the option to randomize that key for the paranoid, because predictable content (reinserts, but also e.g. inserting a specific leaked document or video) makes life a lot easier for an attacker trying to trace somebody inserting content. Of course we would allow reinserts with the same random crypto key for those willing to accept the risk, and we might eventually end up doing some healing between splitfiles with the same hash but different crypto keys, or even elaborate schemes where the insertor uses a random key and then some of the downloaders (who are presumably less vulnerable) reinsert it with a known key... But the immediate application will be to use the overall content hash to generate "random" numbers needed for cross-segment FEC (we will need to use a different hash to seed this than we use for the crypto keys, probably will use hashing the hash with a constant, like JFK uses).
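
A hedged sketch of the hash-derivation idea, with made-up domain-separation constants; the real scheme is not pinned down here, only the shape of it: everything is derived from the SHA-256 of the original content, and the cross-segment seed uses a different constant from the crypto key.

    import java.security.MessageDigest;

    public class SplitfileKeys {
        static byte[] sha256(byte[]... parts) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (byte[] part : parts) md.update(part);
            return md.digest();
        }

        public static void main(String[] args) throws Exception {
            byte[] contentHash = sha256("the original, uncompressed file".getBytes("UTF-8"));
            // Hash the hash with distinct constants (made up here) so the splitfile
            // crypto key and the cross-segment seed are domain-separated: neither can
            // be derived from the other without the content hash itself.
            byte[] cryptoKey = sha256("splitfile-crypto-key".getBytes("UTF-8"), contentHash);
            byte[] fecSeed   = sha256("cross-segment-seed".getBytes("UTF-8"), contentHash);
            // For the paranoid option the crypto key is simply replaced by random
            // bytes, so the resulting CHKs are unpredictable even with known content.
            System.out.println(cryptoKey.length + "-byte key, " + fecSeed.length + "-byte seed");
        }
    }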

2010/06/09

:(

Sorry folks

1249 and 1250 both fix serious bugs:

1249: freenet.ini doesn't exist: 1248 seems to have triggered many Windows nodes to update their config. Unfortunately, because of an older bug, when we do this we create freenet.ini.tmp, write the data to it, forget to close the file, can't rename it over the old one because we didn't close it, remove the old freenet.ini, try to copy over, that fails too, so we end up with no freenet.ini. And then the launcher refuses to start because there's no freenet.ini. This only affects Windows but is pretty bad. If you're affected (if you're reading this over freenet, as opposed to over the web, you probably aren't), rename freenet.ini.tmp to freenet.ini manually.
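
The fix amounts to the usual write-temp-close-rename pattern; the sketch below is illustrative, not the actual wrapper/config code, but it shows where the missing close() breaks the chain.

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class ConfigWriter {
        // Write the new config to freenet.ini.tmp, close it, and only then swap it
        // into place, so a failure at any step leaves a usable file behind.
        public static void writeConfig(File config, String contents) throws IOException {
            File tmp = new File(config.getPath() + ".tmp");
            Writer out = new OutputStreamWriter(new FileOutputStream(tmp), "UTF-8");
            try {
                out.write(contents);
                out.flush();
            } finally {
                out.close();   // the missing close() is what broke the rename in 1248
            }
            // On Windows, rename-over-existing fails, so delete the old file first;
            // because tmp is fully written and closed by now, the swap is safe.
            if (config.exists() && !config.delete())
                throw new IOException("Could not delete old " + config);
            if (!tmp.renameTo(config))
                throw new IOException("Could not rename " + tmp + " over " + config);
        }
    }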

1250: Writing inserts to the client-cache (and possibly the datastore): Inserts were written to the client-cache. They were not supposed to be. This means if you insert controversial data, it will be in your client cache, and if your computer is seized, the bad guys may be able to see it. Note however that if you view the content you've inserted (after the 30 minute period immediately after) it will go into the client-cache anyway; the client-cache is pretty dangerous, you should secure it properly (set physical seclevel to maximum and restart when you want to clear it, set it to high and don't give the bad guys the password etc). Also, in the rare case of an insert via FCP where the client software explicitly asked for the data to be cached in the client cache, it might have been written to the datastore - and hence be remotely probeable.

Sorry folks! However, 1250 also has major improvements to the USK code, as I have described in the announcement mail, which I quote here:

Freenet 0.7.5 build 1250 is now available. It will be mandatory on the 15th. Changes:
- Fix a bug which resulted in inserted data being cached locally in the client-cache. It's not supposed to be cached at all on the inserting node. If you think your computer might get seized and the data you have inserted might incriminate you, please wipe your client-cache (e.g. by deleting the files in datastore/ containing the term "clientcache"). Also if your insert was configured to write to the client-cache it may have been written to the datastore, in which case the inserted data may be remotely probeable, but this could only happen if you did an FCP insert and your client explicitly enabled writing the insert to the client cache.
- Much improved support for USKs. USKs are an updateable key whose latest version is tracked by the node, but at the client layer, in a somewhat unreliable way. When you visit a USK-based freesite, fred searches for the next version in the background; eventually we will provide a way for it to tell you when it finds one. You can ask it to do a search for the latest version by changing the edition number to be negative; this will take some time. Bookmarks also rely on this, as does Freetalk, and ARKs. Some major improvements in this build to USKs:
-- The first time you visit a USK, we probe the datastore for later editions.
-- Cheaper (by doing fewer requests) *and* faster (by doing random fetches of future editions) searches.
-- Faster pick-up of the next few editions if we are subscribed to a USK and we have the latest version already and are waiting for the next version to be inserted, using ULPRs.
-- We insert edition hints based on the current date (year, month, week and day) when we insert a USK. This has been planned for years and will enable us to quickly find the latest version if the edition we have is way out of date. We don't yet use these hints, that will be implemented later.

The next step is to merge sajack's content filtering infrastructure work. After that I will probably tackle even segment splitting, probably the new MHK tester, and then maybe cross-segment redundancy. There haven't been any major objections to the roadmap (or even much in the way of comments on it) so far, not even Ian's usual emphasis on a prompt release, but I guess without Freetalk there's no immediate rush...

2010/06/07

Of spiders and releases

1248 and the Spider

Freenet build 1248 is now available. But more importantly, the new Spider plugin and major changes to the Library plugin are now released. These depend on 1248, so a brief changelog to encourage you to upgrade:

  • HTML filter now avoids rearranging attributes if it can.
  • Make announcement happen a little earlier.
  • Run web interface threads at higher priority when possible.
  • Re-add the 3 actively maintained developer flogs.
  • Russian translation update.
  • Lots of internal changes for plugins.
  • Logger changes, we now buffer for a configurable period, defaulting to 60 seconds.
  • Code cleanups.
  • Require the new Library plugin, upgrade XMLSpider if loaded, and offer Spider.
  • More datastore stats.

Thanks to toad, hungrid, p01air3, xor, sajack, zidel and nikotyan.

The new version of Library and new Spider plugin together allow creating new format indexes by spidering Freenet, in much the same way as XMLSpider allows creating old format indexes by spidering Freenet. The new index format is far more scalable than the old format, and should improve performance both of the spider and the client (Library) doing the search. It has other benefits which I have discussed elsewhere e.g. it is possible to fork a new format index, merging your data to somebody else's index to create a whole new index without having to reinsert everything.

New format indexes are automatically inserted by Library on behalf of Spider. Data from Spider is first written to library.index.data.<number>, then merged into an on-disk index in library-temp-index-<different number>, then merged from disk to Freenet. Only the data that has changed (and the nodes above that data in the tree) are uploaded, and once the index has been updated, a USK is inserted, which is logged in wrapper.log (you may have to grep for it, it's rather chatty).

New format indexes include ranking data, and are functional now in much the same way as old format indexes are, although they should be faster and the user interface during loading isn't quite the same.

Both new and old format indexes now index numbers of 3 characters or more. Also as of the work from the previous build we have reasonable Chinese support.

To use the new Spider, you need to load the Spider plugin (not XMLSpider); Library will already have been upgraded by installing 1248. Then configure it. The only essential options are the number of requests to run, which are two options right at the top. You can also configure keyword blacklists if you want; these are useful both for shameless censorship (or being able to sleep at night depending on your point of view) but also for excluding known spider traps and spam sites. The code will do pretty much all the rest of the work, as you can see by watching wrapper.log. It will spider Freenet, write the data to disk, merge it to big on-disk trees, then merge the big on-disk trees to Freenet, then upload a USK for each one. All of this happens in parallel. Your first USK should be uploaded within a day or two. Expect the process to slow down significantly over time as the index gets bigger (currently my test indexes are taking 7 hours to finish uploading but I expect it will be several days when it gets a bit bigger) - but then the spider itself slows down significantly over time as it runs out of easily findable stuff. I have been testing with max memory set to 1024 (1GB) in wrapper.conf so that should be enough. Note also that we cache everything we upload in library-spider-pushed-data-cache/ , which is currently not garbage collected; you can remove it but then you are relying on the data being retrievable when we need to update it.

A note to would-be standard index authors:

YOU MUST BE ANONYMOUS. I have been running the spider for testing purposes but I will NOT publish any produced indexes. You should publish anonymously, and take your anonymity very seriously, as you are a clear single point of attack/vulnerability; if you are taken down this affects the experience of Freenet for everyone but especially newbies. Note also that if you censor anything, and are traceable, you may be legally compelled to censor everything that is illegal in your jurisdiction. Create an identity (in Frost, FMS or Freetalk) solely for announcing your index and post the USK there. When an index is popular enough and complete and consistent enough, we will add it to the default indexes list.

Roadmap

Historically we haven't been very good at doing things in priority order, and it can sometimes be difficult to tell what's going on, or what needs to be done next. Financially speaking we have a brief breathing room, and there is much to be done on Freenet: the developers have a better grasp of the outstanding challenges than we have had for quite some time, and it is tempting to just get on with it, implement lots of important features because if Freenet works, people will use it, release or no release. However, 0.8.0 is significant for publicity reasons, and sooner or later - hopefully sooner - Freetalk will be ready and we'll have much less of an excuse for further delays. And our finances are far from secure, we have around 15 weeks including my last (in advance) payment. So lets talk about priorities. I list below most of the important things that need to be done in the near to intermediate future. Many of these may be dropped in the name of getting a quick 0.8.0 release, but many of them will be so transformative that we should seriously consider getting them out before 0.8.0. This is an expanded version of a mail I sent to the devl list today.

Immediate todo

  • Release Library, release Spider, and fred build 1248. This is done by the time you read this. However I will keep Spider running for a week or two to make sure there are no more OutOfMemory / constant Full GC bugs.
  • Finish work on USKs and merge it. The usk-improvements branch is substantially improved relative to the old USKs code, but there are further optimisations to do, and in particular I mean to implement date-based hints, which will enable us to catch up quickly on a USK even if we only have an old edition number. It is important that the background fetch after visiting a freesite works (one day we will notify the user either via a system tray applet or a web-pushing pop-up), but I am not certain exactly how much work we will do to check for later USKs when fetching them with positive edition numbers; so far we check the datastore before showing the page.

Critical stuff depending on other people

  • Freetalk and WoT. p0s is making good progress, very few FIXMEs left but a number of critical bugs in the bug tracker. ASAP these should be made semi-official (loadable from plugins page with a caveat), and then official. FlogHelper also when WoT is done.
  • Windows installer. Zero3 is working on a version that doesn't run as a service. This should fix the critical release blocking problems we have now. He has also done some work on various tweaks e.g. automatic memory detection, on a branch; that needs testing, merging and deploying (I can probably help somewhat with this).

Important stuff depending on other people

  • Mac installer/system tray. mrsteveman1 has started work on a pure java system tray applet which apparently will work on new macs, and should work on linux too. Any izPack gurus please contact him, as he has had horrible problems trying to get any bash scripting to work during installation.
  • Memory autodetection on Mac. Hopefully we will have this too?

Critical ongoing stuff

  • Slow opennet bootstrapping - again! I don't understand why opennet bootstrapping is slow again. We (mostly I, but with lots of helpful advice from vive and evan) did a load of work on it earlier this year, and dramatically improved it, but it's back to the old pattern of successfully connecting to the seeds, announcing to lots of nodes, and 95%+ of the nodes we announce through not wanting anything to do with us. It is possible this is an attack, or maybe we just don't have the capacity ... or some weird bug has turned up. In any case this is something we need to address periodically but especially just before a release.

Important easy feature work

  • New MHKs tester. Determine whether fork-on-insert makes a big difference, and whether we still have a big benefit from triple-inserting. Will take some time to get results but shouldn't take more than a day or two to implement.
  • Even segment splitting. Currently it is possible for a splitfile to have 50 segments of 128 blocks (plus 128 check blocks) and then one segment of 1 block (plus a check block). This is not acceptable! Splitting the data more evenly will help reliability of many files.

Easy optimisations

  • Datastore optimisations. Increase the size of the Bloom filter so it doesn't over-fill so easily (resulting in sky-high false positive rates; see the arithmetic after this list). Make sure we reconstruct it when we need to. Test these mechanisms. Consider additional bitmaps in the datastore to limit disk I/O.
  • FEC striping. Don't stripe segment decoding, read everything into RAM, decode entirely in RAM. It's only 4MB!
  • Memory autodetection tweaking. Various parameters such as the maximum amount of memory used by in-RAM temp buckets could be configured from the memory limit, which itself is autoconfigured on recent Linux installs, and soon on Windows and Mac too. This might significantly improve performance in some cases - the alternative is temp files which are not only on disk but also usually encrypted. And there are other tweakables such as the number of parallel (de)compressions to run, which could also make a significant difference.
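
On the Bloom filter sizing point above: the standard approximation for the false positive rate with n keys, m bits and k hash functions is (1 - e^(-kn/m))^k, which degrades very quickly once the filter over-fills. The bits-per-key figures below are made up for illustration, not the datastore's real parameters.

    public class BloomMath {
        // False positive rate ~ (1 - e^(-k/bitsPerKey))^k, where bitsPerKey = m/n.
        static double falsePositiveRate(double bitsPerKey, int k) {
            return Math.pow(1 - Math.exp(-k / bitsPerKey), k);
        }

        public static void main(String[] args) {
            int k = 4;                                      // hash functions
            for (double bitsPerKey : new double[]{2, 4, 8, 16})
                System.out.printf("%.0f bits/key, k=%d: fp rate ~ %.4f%n",
                        bitsPerKey, k, falsePositiveRate(bitsPerKey, k));
        }
    }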

Fairly important security work

  • Updater keys. Make it reasonably easy to change the updater keys. This is rather more difficult (disruptive) at the moment than it should be. Sooner or later we will need to do this, and it needs to be easy.
  • Eliminate plaintext files. There are still plaintext files with potentially incriminating user data in, for example the bookmarks list. These must be moved into node.db4o, which is encrypted according to the physical security level. Many plugins also create similar on-disk files (e.g. library index list etc), or even databases; we should move these into node.db4o via the plugin storage API, or encrypt their databases using a key stored in master.keys via some appropriate API. This sort of stuff is important if a node is seized by hostile authorities, so vital for use in various hostile environments - Iran (80% dialup but maybe still useful to a minority), China, etc.
  • Consider basic RSKs. Revocable Subspace Keys have been planned ever since 0.7 was started and before that. Basically these make it easy to announce that a freesite's private keys have been compromised, and possibly negotiate a new location based on multiple "trustee" keys. This is essential for an official project freesite, or any other freesite where you might download software from; but it is also useful for hostile environments or anywhere where a freesite being compromised might have serious consequences. The first stage, without support for negotiating a new key, should be relatively straightforward, and is worth considering for 0.8.0.

Important ongoing stuff: data persistence

When we have enough data from the new MHKs tester, we will probably need to make further changes to routing, storage etc to improve persistence. If it is still true that triple inserting data drastically improves its longevity, we need to know why, and we may be able to adapt inserts to achieve the same result at less than the cost of inserting the same block three times.

As part of this effort we will need random-routed tester requests at some point. These gather data such as average quantised datastore size. The data they gather is separated from any other data, so it is more or less useless to an attacker. Also, a variant will do displaced fetches/inserts, which are useful for testing but also offer an "is my site retrievable" feature.

Larger features that are worth seriously considering

  • Cross-segment FEC. This would make sharing large files over Freenet significantly more reliable, dramatically boosting overall success rates for a given block success rate (although the minimum block success rate needed for overall success isn't improved by as much). Currently, we divide files into 4MB chunks and encode each one separately, with its own check blocks. The problem is that for big files, the likelihood of one of those segments failing becomes very large. The solution is an extra layer of redundancy. Cross-segment FEC is very similar to the encoding used on CDs and CD-ROMs, which is highly reliable; see the illustrative sketch after this list. The alternative is LDPC codes, but it appears a lot more work will be needed before this is possible, and LDPC codes would require larger code changes. Cross-segment FEC is a moderately big job - weeks not days - but should dramatically improve Freenet's usability for sharing large files.
  • Filesharing database usage optimisation. We do a lot of disk I/O at present if we have a big queue and this is quite disruptive to people's computers. There are many things we can do to reduce this. Again, weeks not days, but maybe worth it.
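
To make the cross-segment idea concrete, here is a purely illustrative sketch (toy numbers, nothing like a real encoder): block i of each segment in a stripe of consecutive segments forms a cross-segment group, which would get its own extra check blocks, so that even a completely failed segment can be rebuilt from its neighbours in the stripe.

public class CrossSegmentGroups {
    public static void main(String[] args) {
        int segments = 6;          // toy numbers; real segments hold up to 128 blocks
        int blocksPerSegment = 4;
        int stripeWidth = 3;       // segments combined into each set of cross-groups

        for (int start = 0; start < segments; start += stripeWidth) {
            for (int block = 0; block < blocksPerSegment; block++) {
                StringBuilder group = new StringBuilder("cross-group:");
                int end = Math.min(start + stripeWidth, segments);
                for (int seg = start; seg < end; seg++)
                    group.append(" seg").append(seg).append("/blk").append(block);
                System.out.println(group + "  -> plus its own check blocks");
            }
        }
    }
}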

New freenet-ext.jar

  • Native x86-64 FEC. The new freenet-ext.jar includes 64-bit native FEC libraries. This is a nice optimisation, speeding up decoding of splitfiles significantly. We've always been a bit reluctant because testers have occasionally had segfaults and similar, but recently TheSeeker has been saying his segfaults don't seem to be related to whether he uses the new jar or not.
  • Merge the ATOM filter. This is another outcome of last year's Summer of Code which has yet to be merged. It needs the JDOM library, which is somewhat hard to build cleanly (as opposed to utterly horrible to build cleanly, like the monster GWT - but we only need GWT at build time). In any case, it is long overdue: ATOM feeds over Freenet would be great. We don't yet have a filter for the older (but possibly more popular?) RSS aggregation format; it's harder to parse, but eventually we will want one. The same student also created a partial filter for SVG, which would be worth finishing and integrating eventually. However that sort of thing can take quite some time (it certainly did for the new CSS filter), and is not an immediate priority.

Key usability stuff

  • Automatic backups for node.db4o. IMHO this is very important.
  • Better network security selection. As we've discussed, it is important to ask the user first whether they want opennet or not, and then have the security level choices follow from that. This will lead to less confusion and disappointment.
  • Checkboxes on the queue page. This is a popular and relatively easy request. One of our prospective Summer of Code students went half way to implementing it.
  • New theme. Seriously consider making the bookmarks a menu and using some variant of the minimalist theme.
  • Responsive queue page. Cache the top level of the persistent requests queue in RAM so we can always generate the queue page. Tell the user when we can't complete an operation quickly.
  • Loads more small bugs and issues. We have many small usability bugs in the bug tracker, and many small bugs that have usability impacts. We need to fix some of them.

Summary of short-to-mid-term roadmap

  • Current work
  • Freetalk
  • Windows installer
  • Ongoing: Opennet bootstrapping
  • Easy features
  • Easy optimisations
  • node.db4o auto-backup
  • Other important but easy usability work
  • Consider: Security work
  • Consider: Cross-segment FEC
  • Consider: New freenet-ext.jar
  • General bugfixing.

Longer term...

I reluctantly propose to postpone binding splitfile hashes and many other things as not being vital for 0.8. There are many other bugs and important features which should be implemented after 0.8. Most of them are on the bug tracker.

Summer of Code

nikotyan (who is currently busy with exams, as was his mentor, infinity0, until very recently) will be working on distributed indexing. This could and should result in a good spam-proof file-searching system.

sajack (mentored by nextgens) is working on content filtering. He has posted a pull request for architectural changes moving filtering into the client layer and out of fproxy, which will allow some important optimisations for big files when he gets to work on audio/video filters later on, as well as allowing us to filter data downloaded direct to disk. He has also given us the HTML changes mentioned above. If he has any time left at the end of summer he may start a minimal PDF filter (the PDF spec is huge, but a great many PDFs use very few features).

zidel (mentored by evanbd) is working on a new low-level packet format which will use padding to transmit data, thus greatly improving payload percentages, work well on small MTUs, and pave the way for transport plugins (especially with small packet sizes). So far there have been some questions about refactoring and some code I haven't looked at yet. I'd appreciate a status report if zidel has got far enough to make one.

Local stuff

In non-Freenet news, feeling very slightly better, particularly the digestive system. Chest isn't great but might just be a cold; if it's a chest infection it will sabotage the spirometry, but they scheduled that in 6 weeks to try to avoid it. Got over 31 hours of work done last week - admittedly only 27 of them billable to FPI. Nonetheless, looking up. Looking for contract work semi-seriously, partly because I'm gonna have a load of hardware costs in the near future and I'm financially iffy even without that, partly because FPI is precarious as always, etc; contact me if you are interested in a cheapish Java programmer with extensive experience in Freenet (networking, crypto, web interfaces, db4o, etc, but sadly no SQL/J2EE, which seems to be what most people are looking for) - see my homepage.

With a great deal of luck AMD will RMA thomas's X6, in which case I will have a new CPU, twice as fast as my current one; even if they don't, it may be worth risking a cheap motherboard in an expendable box, as there aren't that many bent pins and as far as I know the reason it doesn't work is simply that he put it in a motherboard that doesn't support it... We'll see how generous they are feeling (as I've explained, the situation is their fault - it should be possible to remove the fan without pulling the CPU out, as it is with Intel's better-designed sockets). In either case I'll probably upgrade to 12GB of RAM, and T's two old motherboards will find homes in other systems around here (family of geeks; he's upgrading to a Core i7). I've also been wondering about getting an SSD, though it would rather be overkill... Hopefully when I'm not running the spider any more Servalan will behave a bit better, although of course the disk I/O optimisation work I mentioned above should also help significantly - not only with my system but with everyone else's too!

Politics and technology

Meanwhile the politicians are rumbling about sharing the pain, buy-in and fairness... while talking openly about "tough decisions" on pay, pensions and benefits. Expect drastic cuts in benefits, generalised tax rises (although we may see some semblance of fairness), and of course "difficult" spending cuts... Meanwhile, an EU carbon cut of 30% by 2020 seems to be dead, with France and Germany opposed, presumably joined by Poland etc. Miracles can happen; but no sign of one at Bonn so far, from the minimal coverage I've seen. Exclusively developed world professors talking about voluntary action to create jobs and improve energy security, while America's easy option for energy security continues to pour vast quantities of crude into the coastline; the most basic, essential projects using current technology continue to be blocked because they spoil the view, and so on ... Voluntary action is code for no action, business as usual, deploying technologies only when they are perfect, or at least cheaper than fossil fuels (carefully avoiding accounting for the delayed costs of carbon, the immediate human costs of pollution, and anything else that can be safely thrown out the window in the quest for a short-term profit). When I was in Copenhagen there were posters up arguing to develop tech now and cut later; the fact of the matter is (at least the consensus view) that if we in the developed world don't deploy the technologies we have now (which cost a little more than the fossil-fuel equivalents, but create jobs in an industry that won't disappear overnight in a puff of red numbers), the cuts we will need later will be of such magnitude as to be infeasible without major pain, and likely require large scale geoengineering as well. This is the Last Parliament: If we don't start to make major progress by the next UK election in 2015, life gets a whole lot harder. Meanwhile, the oil shock will be on us (again) in less than 10 years, and proactive action now will likely save us a lot of trouble.

A few technologies that may eventually change the game by being cheaper than the alternatives are electric cars (assuming lithium-air batteries, 10 years+, and widespread infrastructure, more than that; car companies estimate 10% of sales could be electric by 2020, it's just way too slow; there are tens of thousands of deaths from air pollution per year just in the UK, even ignoring climate), kite power (which according to estimates published in New Scientist should eventually be cheaper than coal, while providing reliable power because it reaches into the jet stream), and the Green Grid (when it blows in Russia it's not blowing in England etc). Meanwhile, those who invest in BP for their retirement get exactly what they deserve; ah well, not quite.

Books, technology, justice, sin and society

Reading-wise, just finished The Chrysalids, which is an interesting Wyndham tale featuring a post-nuclear-holocaust world, mutants, telepathy, inter-species ethics, and more; many of these being topics that come up in his other works, which are well worth reading (albeit perhaps at arm's length!). IMHO a human subspecies isn't going to emerge spontaneously any time soon, and I don't believe radioactive fallout would necessarily result in one either; in any case that particular horror has receded slightly (but not by as much as people assume, it's gonna be a rough century). However, technology presents many such issues; I strongly recommend Rainbows End (Vinge, 2006), an interesting tale which manages both to be built largely on ideas which are either inevitable or are being talked about very seriously by scientists, think tanks etc, and to push forward his singularity agenda (singularity here not being artificial intelligence as such, but enhanced intelligence, including telepathy). Of course many such ideas (e.g. really cheap biotech and its consequences both very positive and very negative) are likely to be further off than he envisages, and some of them may turn out quite different: Re bioterror, the good guys benefit too, and have way more resources than the aggressors, and re telepathy, much comes down to how fast you can compose text in future eye-tracking, brainwave or direct neural interfaces. Smartphones seem to be an important development, which may result in useful augmented reality and the aforesaid ubiquitous computing interfaces; one day I'll get one, maybe an N900, the tradeoff between the principle of open root access versus the possible employability benefits of Android is tricky... Of course, setting up wifi locally would be a first step. One thing to note is that reading in bed from a laptop is not conducive to a good night's sleep; an e-reader is far preferable physiologically. And all these gadgets - and the upgrades that seem essential to make Servalan behave itself - cost mucho money. That can be reduced significantly by time, Moore's law, and so on; but there will certainly be various forms of digital divide for some time to come.

And that's before we even get onto the topic of enhancement drugs (Ritalin, modafinil, etc). Personally I don't smoke (duh), drink (matter of taste mostly, maybe genetic), or take illegal drugs (quality control), and I consume relatively small amounts of caffeine, but well, Jesus took drugs, and helped others to take drugs, so I don't have any moral problem with it (and don't try to argue that alcohol isn't a drug, it's ranked #4 by the UK advisory committee on aggregate societal and individual harm, after only cocaine, heroin and one other). I'm also rather skeptical of any claims about chemicals being safe (despite being a geek); for quite some time we bought organic food, partly because dad's guts seem to be able to tell the difference, until finances made that impossible to sustain. The point is that even before we get into issues of human genetic selection and in the long term human genetic engineering, there are a great many things that divide us within "what it is to be human". On a more practical level, 2 billion people don't have access to decent sanitation, nearly a billion don't have access to clean drinking water; the global poor are far more vulnerable to natural disasters, frequently have no access to capital, and are stepped on by the powerful in ways that we middle class citizens see much less often and much less blatantly. We think we're just hanging on, because of our perception of what we need, based on (mostly) artificial scarcity resulting from the market - they really are just hanging on. And frequently it's our fault - broken promises, outright exploitation, discredited ideologies, and so on. Often we can make a difference as citizens and consumers.

I guess the point, if there is one, is that 1) the powerful, who have all the advantages (access to technology, capital, hygiene, etc), don't need to exterminate the weak, because they can either ignore them, exploit them, or blithely destroy their future, and 2) human nature is the fundamental problem. Can human nature be changed? Lewis said no (more or less; see The Abolition of Man and to a lesser but more entertaining degree That Hideous Strength); Egan says yes (see e.g. Chaff in Luminous, another Egan collection of short stories). An individual human can change himself, although there are limits; technology may change what those limits are, but ultimately the difference isn't all that fundamental, it still comes down to your choices as a moral actor. If you choose to be purely self-interested in a short-term and cynical sense, technology may be able to help you; if you choose to care about other people, again technology can help you. Even if what it is that is making the choice is itself malleable (which of course it is to a degree, even without technology), humans are responsible for their actions, and they frequently choose very badly.

The other book I read recently was Greg Egan's Oceanic. This contains some fairly blatant attacks on religion, and some equally blatant techno-utopian content, but as an honest ideas writer, he also talks about the problems of technology. There is a spectrum of ideas from post-scarcity economics (essentially Star Trek economics, meaning that everyone can afford the basics of life because the basics of life are ridiculously cheap; arguably this is practical now, nobody really wants to sit on the couch watching TV all their life unless they have bigger psychological problems, see the Citizen's Income) up to people as software, with backups all over the place, living as long as you like (often, but not always, conveniently glossing over the issue of fertility in a finite universe), invulnerable to any form of interpersonal harm. IMHO again this does not change the fundamentals. It might be centuries before we can upload consciousnesses, if the Penrose/quantum microtubules lot are right, but fundamentally if people interact with each other there is the potential to hurt one another. In a society with a legal system, the forms of harm are supposedly limited by deterrence (although in practice deterrence is never enough). In a world where nobody need starve or be enslaved (for example being forced to be exposed to hazardous chemicals), these are further limited; in a purely virtual world (as in e.g. Diaspora), these are again limited. But a great deal of harm in the real world comes at the more refined level: Betrayal, social exclusion, unkind words, psychological bullying, and so on. Of course if physical harm were impossible the edge would be taken off much of this. But does that mean that if we can provide our needs for practically free and back up our consciousness so that even if we could be physically harmed we could always come back, concepts such as sin are outmoded and salvation is unnecessary? Not in my book.

2010/05/27

Of many things...

Freenet: New format index Library/Spider progress

Lately I have been working on the new index format for Freenet keyword searches (the format has many other Freenet applications). As I've explained, this is a massive improvement on the old format: It is scalable, it can be forked easily, it supports selective uploading when merging new data into an on-Freenet index, and various optimisations promise to make it (at least for popular indexes) really fast. It is also the topic of one of our Summer of Code students, who is supposed to be working on distributed searching - using the Web of Trust to find indexes, and searching all of them (with lots of scalability optimisations), and making it easy to publish and maintain your own file indexes within this so that we can start to have a decent filesharing search system. The new index format is needed for its current primary application as soon as possible; it will have a big impact on the newbie end-user, because it needs to fetch a lot less data to answer a search and hence is faster, and it's a lot easier to maintain a new-format index. So I am working on it now, as it's a big and relatively easy win, infinity0 having done most of the work for me.

Getting it right has been a lot of work and an awful lot of waiting for computers to run tests. Probably the biggest area has been making it work well within a limited memory budget. Recently this involved reorganising update() to process the data in order in such a way that the queues didn't explode, a neat bit of optimisation by queueing theory (and no, I've never studied queueing theory). Anyway, right now, if you load an up-to-date (git) version of Library and the new Spider plugin, and configure it (set the number-of-jobs options, the buffer size, and possibly the bad-keywords list if you want to exclude specific sites, e.g. spider traps), the Spider will happily spider Freenet, feeding the data to Library in large chunks (each containing a series of keywords and a bunch of sites and word locations where they occur). Library will upload a freesite index, updating it with each such bundle. It automatically inserts a USK for each such upload - we don't display this in the UI at the moment, but it is in the wrapper.log / standard output.

The problem, of course, is that this is slow, and it gets slower the bigger the index gets, so Spider ends up waiting most of the time for Library to do its uploads (even though we allow quite a bit of slack). If we assume the on-Freenet index we are updating is very large, we'd expect it to have to update one subtree for each term, plus one leaf of the main tree, plus all the nodes above that leaf. Making the leaves or nodes smaller won't help much because it undermines compression - the nodes are several megabytes of programmer-readable text, but they compress really well down to a relatively small number of keys. So what we will have to do is merge the data from Spider into an on-disk tree, and then periodically merge that (much larger) tree into the on-Freenet tree (creating a new temporary on-disk tree at the same time). The resulting uploads to Freenet will take a long time but should be reasonably efficient due to e.g. overlapping terms from many smaller updates.
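
A minimal sketch of that two-stage buffering, assuming a hypothetical mergeIntoFreenetIndex() method (this is not Library's real API, just the shape of the idea): Spider output accumulates in a small on-disk tree, and only when it passes a threshold is it merged into the big on-Freenet tree in one go.

import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical two-stage merge buffer. mergeIntoFreenetIndex() stands in for
// Library's real (and far more involved) on-Freenet B-tree update.
public class TwoStageMerge {
    private final SortedMap<String, StringBuilder> onDiskTree =
            new TreeMap<String, StringBuilder>();
    private long bufferedBytes = 0;
    private static final long MERGE_THRESHOLD = 64L * 1024 * 1024; // illustrative

    // Called with each bundle of term -> postings data the Spider hands over.
    void addFromSpider(Map<String, String> bundle) {
        for (Map.Entry<String, String> e : bundle.entrySet()) {
            StringBuilder postings = onDiskTree.get(e.getKey());
            if (postings == null) {
                postings = new StringBuilder();
                onDiskTree.put(e.getKey(), postings);
            }
            postings.append(e.getValue()).append('\n');
            bufferedBytes += e.getKey().length() + e.getValue().length();
        }
        if (bufferedBytes >= MERGE_THRESHOLD) {
            mergeIntoFreenetIndex(onDiskTree); // slow, but touches each subtree once
            onDiskTree.clear();
            bufferedBytes = 0;
        }
    }

    void mergeIntoFreenetIndex(SortedMap<String, StringBuilder> batch) {
        // Placeholder: in reality this walks the on-Freenet tree and reinserts
        // only the leaves and ancestor nodes touched by the batch.
        System.out.println("Merging " + batch.size() + " terms into the on-Freenet index");
    }
}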

Freenet: USKs and Freetalk

Freetalk is rapidly approaching a new jar and hopefully semi-official status (i.e. inclusion on the plugins page with a warning). It has around 22 critical FIXMEs left for p0s (xor) to fix before that can happen; many of these represent serious bugs that could be exploited for a denial of service attack, so they have to be fixed before we can make Freetalk easily available from the plugins page, even with a "caveat emptor" warning.

Both Freetalk and FMS use "outbox polling" to deal with spam: Frost, an earlier chat system, used a single global outbox, writable by anyone, to post messages; this was implemented as a KSK queue, so a message might be KSK@frost-boardname-date-index - where the last part, "index", is just a number incremented for each post that day. As has been demonstrated by some helpful person, who on Freenet can remain anonymous forever, and whose motives could be anything from benign to malign to sheer curiosity, it is very easy to break this system, precisely because anyone can post to the global queue. This is why Freetalk and FMS use a different system: Each identity has a separate outbox queue on which it publishes messages, identities trust or distrust each other in a Web of Trust, and your Freetalk will watch the outboxes of all the identities it trusts. Of course this cannot possibly scale - but neither can Frost. In practice there are several reasons to think we can make it work up to a reasonable size - we propagate "hints" about recent messages, Freenet can be reasonably efficient if lots of nodes are polling the same key, we don't necessarily have to poll every identity that we indirectly trust at the same frequency, and so on. Anyway, FMS's outbox queues are per-day, that is, you have something like SSK@blah,blah,blah/messages-date-index. Freetalk's messages are not per-day, we have SSK@blah,blah,blah/messages-index. FMS's approach has the advantage that we can immediately find the first message for a given day; the drawback is we need to figure out which message is the last for that day. Freetalk's approach desperately needs date-based edition hints - something like the index number for the first message in a day inserted to SSK@blah,blah,blah/messages-date. Freetalk does not yet implement this (although the other hints I've mentioned mean it's not quite as bad as you'd think), and it relies quite heavily on USKs. For a long time, it has been planned for USKs to have a similar mechanism...
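
For concreteness, a hypothetical sketch of the three slot-naming schemes just described (key strings only; the exact formats the real plugins use differ in detail):

// Illustrative key naming only; Frost, FMS and Freetalk each use their own
// exact slot formats, which differ from these strings.
public class OutboxKeys {
    // Frost: one global, anyone-can-write queue per board per day.
    static String frostSlot(String board, String date, int index) {
        return "KSK@frost-" + board + "-" + date + "-" + index;
    }
    // FMS: per-identity outbox, slots numbered per day.
    static String fmsSlot(String identitySSK, String date, int index) {
        return identitySSK + "/messages-" + date + "-" + index;
    }
    // Freetalk: per-identity outbox, a single running index with no date.
    static String freetalkSlot(String identitySSK, int index) {
        return identitySSK + "/messages-" + index;
    }

    public static void main(String[] args) {
        String id = "SSK@blah,blah,blah"; // placeholder identity key
        System.out.println(frostSlot("freenet", "2010-06-10", 3));
        System.out.println(fmsSlot(id, "2010-06-10", 3));
        System.out.println(freetalkSlot(id, 42));
    }
}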

USKs are a kind of updateable key implemented in the client layer. CHKs and SSKs (and KSKs, which are a kind of SSK that everyone can write to and thus are not very useful, as Frost has shown) are implemented by the node. USKs are essentially a sequence of SSKs. So USK@blah,blah,blah/sitename/number is actually SSK@blah,blah,blah/sitename-number. But when you fetch a freesite by a USK, the node will check whether it knows of a more recent version of that USK, and if so, redirect you to the new version. It will also start a fetch in the background for later versions of that USK; in future (when web-pushing is fixed and enabled by default), Freenet will provide a pop-up notification if it finds a later version of a site while you are browsing it. You can also force Freenet to find the latest version, by making the number negative (note this will take some time); or you can add a site to your bookmarks, and your node will "subscribe" to it, checking for updates frequently for as long as it is bookmarked. This subscription mechanism is what Freetalk uses so it doesn't have to poll the slots itself.
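
A tiny illustrative sketch of that USK-to-SSK naming relationship (string manipulation only; fred of course has proper key classes for this):

// Illustrative only: this just shows the naming relationship described above.
public class UskToSsk {
    // USK@key/sitename/edition -> SSK@key/sitename-edition
    static String toSsk(String usk) {
        if (!usk.startsWith("USK@")) throw new IllegalArgumentException("not a USK");
        int lastSlash = usk.lastIndexOf('/');
        long edition = Long.parseLong(usk.substring(lastSlash + 1));
        String body = usk.substring("USK@".length(), lastSlash);
        return "SSK@" + body + "-" + edition;
    }

    public static void main(String[] args) {
        // Prints SSK@blah,blah,blah/sitename-23
        System.out.println(toSsk("USK@blah,blah,blah/sitename/23"));
    }
}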

The problem is, USKs are not terribly reliable. When you click on one, unless it is bookmarked, there is a good chance it is way out of date. Even if you subscribe to a USK, it can take the node a long time to catch up if you started at an old edition. There are nasty bugs related to multiple clients subscribing to the same USK at different editions, and so on.

So while waiting for the Library/Spider combination to run out of memory, I've been making a start on long-overdue serious improvements to USKs. The most critical thing will be date based hints: Inserting a hint containing the latest edition to a key determined by the date. These will be hierarchical, and based on human-readable dates, so on inserting a USK update USK@blah,blah,blah/toad/23 we would also insert to USK@blah,blah,blah/toad-USKDATE-2010, USK@blah,blah,blah/toad-USKDATE-2010-05, and USK@blah,blah,blah/toad-USKDATE-2010-05-27 (the exact patterns have not been finalised; IMHO it's important to have a week-number one as well). Of course, if any of these have already been inserted, we can't change them. So when we fetch a USK, we can fetch the year, month and day hints (and if we are having difficulty, maybe try last year, last month etc), and quickly get a rough idea of the latest edition - and then fetch from there. This will prevent us from getting so far out of date that we can't find the latest edition, or can't find it quickly enough. It is also exactly what Freetalk is missing for its polling. I have not started implementing hierarchical date based hints yet, as there is much refactoring and bug fixing to do first on USKs, but it will be done soon, as it is essential for Freetalk to work efficiently, and would significantly improve Freenet for end users. I am already very close to fixing the problems with multiple subscribers hinting at different editions, which took quite a lot of refactoring to deal with.
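
Since the exact patterns have not been finalised, the following is purely illustrative of what the hierarchical hint keys might look like (including the week-number variant mentioned above):

import java.util.Arrays;
import java.util.List;

// Purely illustrative: the hint-key patterns are not final.
public class UskDateHints {
    // Year, month, day (and here also week-number) hint keys for one USK update.
    static List<String> hintKeys(String uskBase, String siteName, int year, int month,
                                 int day, int week) {
        String prefix = uskBase + "/" + siteName + "-USKDATE-";
        return Arrays.asList(
                prefix + year,
                prefix + String.format("%04d-%02d", year, month),
                prefix + String.format("%04d-%02d-%02d", year, month, day),
                prefix + String.format("%04d-W%02d", year, week)); // week-number variant
    }

    public static void main(String[] args) {
        // Hints that might accompany an insert of USK@blah,blah,blah/toad/23
        // (ISO week 21 contains 2010-05-27):
        for (String key : hintKeys("USK@blah,blah,blah", "toad", 2010, 5, 27, 21))
            System.out.println(key);
    }
}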

Local situation

I am still waiting for the NHS to contact me to organise a date for my next lot of chest tests (spirometry); guess they can't be that urgent. Remaining health problems seem to be improving; I've learned a lot more about such things than I've ever really wanted to, but seem to be getting there.

Computing-wise, Servalan continues to randomly pause the whole UI for annoyingly long periods (a minute or so?). This doesn't appear to be a swapping problem. I've added the Linux Compressed Cache driver, which is a fascinating piece of code; I've always been of the view that, correctly applied, compression should be able to improve performance rather than just available space (IMHO this is true of filesystems as well, for stuff that doesn't need a lot of within-a-huge-file seeking; see the Reiser4 cryptcompress plugin and BTRFS compression support). In any case I've had pauses even when I'm not in swap, or only marginally in swap, although now I have this fairly scary picture:

OrigDataSize:    3507080 kB
ComprDataSize:    820801 kB
MemUsedTotal:    1111584 kB

Considering that the system has 8GB of RAM, a fast disk for Freenet logs, source code, root and swap, and RAID1 on slow big disks for the rest, this sort of thing really shouldn't happen or be necessary. Aggravating factors include a MythTV backend (and the requisite MySQL), my rather excessive use of tabbed browsing in Firefox, Eclipse, Freenet with a 1GB memory allocation for the aforementioned Library/Spider indexing tests, KDE 3.5, and so on. As I understand it, the pausing is caused by heavy backlogs of disk I/O; the vast bulk of disk writes, and a significant proportion of total disk I/O, is down to Freenet logfiles, so it may help to move them so that only logfiles are on the fast disk... Although RAM appears not to be the immediate problem, I have repeatedly pondered getting 16GB of RAM ... but it really isn't affordable, certainly not at the moment. The new AMD Phenom II X6s look very impressive too, but I don't think that CPU horsepower is the major problem here at the moment. My brother recently bought one, discovered it won't work with his AM2+ motherboard, and got the somewhat familiar I-can't-unlock-the-chip-without-taking-the-fan-off-and-I-can't-take-the-fan-off-without-pulling-the-chip-out RMA generator design flaw...

Hardware

Had an interesting discussion with saces (who is working on making Freenet work with OSGi so we can have proper plugin dependencies, and who has a RepRap), featuring various interesting hardware; he was hoping to use Powered USB to connect to his sealed rooftop Freifunk hub, but it looks like the cable length wouldn't be acceptable even if he could find the connectors. Which brings me to a few pieces of hardware that, if mass-produced cheaply, could have a major disruptive impact. I've talked about this before, just as much of what I've said above partially duplicates past musings, but ideas evolve over time, so I'm not only repeating myself. Anyway, my 3 favourites at the moment:

  1. Cheap roof-mountable Fourier antenna arrays - The problem with Freifunk, and infrastructureless community wireless meshes in general (as opposed to centralised tiered structures such as Bristol Wireless), is that a mesh of omnidirectional aerials means everything has to be repeated ad infinitum, over a shared medium that everyone is listening to, colliding with itself over and over again. Even before you start worrying about scalability of ad hoc routing algorithms due to topology update broadcasts, route finding and so on, you've already cut your bandwidth by a dramatic factor. What we need is rooftop antennas - because they have a longer range - and directional antennas pointing to several of the other rooftop nodes. The problem is, directionals are expensive to buy, install, re-point when the wind blows stuff around etc, and every so often you lose a node when somebody moves house etc. In other words, it's not practical - it's far less practical than a purely omni-based community network in the Freifunk model. If you only ever need connectivity to the local node, this isn't an issue, but if you want ubiquitous, high bandwidth, community owned infrastructure where a lot of the nodes get their backhaul through the network, it matters. The ideal solution then would be a single box which can be mounted on a rooftop, which includes sector aerials for local coverage, and multiple, automatically adjusted directional aerials which seek out and connect to nearby nodes for backhaul. Of course this will not be cheap this side of universal nanoconstructors; commercial kit that does this sort of thing costs thousands. Now, a Fourier antenna array is an array of antennas which can disentangle incoming signals from any direction. They are increasingly used for specialist forms of radio astronomy where you need to watch the whole sky at once. With much better technology they could actually be used for shorter wavelengths (e.g. light!), but that's a long way off. They can also be used backwards, as I understand it - they can emulate an omnidirectional broadcast, a semi-directional "sector" aerial, or even a reasonably directional signal. So, to change the world, to take back the internet infrastructure, to build the user-owned decentralised wireless utopia that so many have dreamed of, what we need is a cheap roof-mountable Fourier array antenna, with appropriate routing hardware and software, and a fast downlink to the wireless router inside the house. This is a logical evolution of the current indoor MIMO stuff, but using a lot more antennas; I guess the real question is 1) is the processing power or the antennas the big cost? If the former, Moore's law (and custom fabrication) eventually solve all problems, and 2) will there be cheap outdoor versions with automatic, efficient, directional meshing? Oh and 3) is wireless dead in the UK after the Digital Economy Bill anyway? In more friendly regimes, there are ISPs who help you to install a rooftop wireless connection, usually providing backhaul over DSL and roaming via the wireless; maybe they will help to bring this about. Or maybe it'll always be too expensive, or maybe we actually need the sorts of speeds that fiber can deliver and wireless can't. We'll see.
  2. Cheap indoor filesharing capable routers - Increasingly people don't have desktops, they have phones and laptops. And their laptops don't run all the time - they go to sleep, get carried around, they move from one area of poor (expensive, throttled) network connectivity to another, and so on. This is far from ideal for filesharing: Even bittorrent doesn't work well if you have no seeds! Sure the really popular stuff is available at speeds approaching link capacity, but if you want big, unpopular files, you have to wait. It's even worse for Freenet: Darknets are likely not to work at all unless there is a reasonable uptime, and even on opennet, with big datastores, content going offline is a leading cause of loss of content. We can increase redundancy and so on to improve on this, but it is likely that the amount of content available if you are prepared to wait a few days is rather larger - either on Freenet or Bittorrent - than the amount available "instantly". So what we need is a router, with wired and wireless connectivity, low power and fanless, into which you can plug a portable hard disk (or flash drive), with enough memory to run Freenet and Bittorrent. It could also do the "home server" thing - a repository of files of interest to the household when others' laptops are away - which there was a good deal of hype about not long ago, and plug into the TV to play media files, either from filesharing or from other people's laptops. Ideally, if hardware is no object (remember Moore's Law? How much is 100 million transistors and 512MB of RAM going to cost in 5 years' time?), it could run MythTV as well. In the long term, it could also support sneakernet segments of Freenet: UWB to rendezvous with mobile devices and transfer their data to mass storage, or USB ports to fill up sticks for exchange with friends for more literal sneakernet. In any case, if such a thing was mass produced, and popular, it would make filesharing in general and Freenet in particular a lot more viable, and thus inevitably result in lawsuits. The recent fashion for applets might help here: Build a powerful router, let the user install the by-then-illegal software! Of course only the really geeky users would - but that might be all that's needed.
  3. Sneakernet over phones - Unrestricted phones (that is, phones that you can install anything on without political or Trusted Computing interference, i.e. not made by Apple), with Ultra-Wideband (UWB) support (aka very fast very short range wireless networking), and enough RAM, storage and horsepower (RAM is the largest issue IMHO) to run something resembling a Freenet node. When you meet up with your friends in a pub, your phone automatically realises that your friend's phone is nearby, and does a UWB transfer of data you want and data he wants. The opennet equivalent of this is Haggle, which essentially broadcasts requests for content to everyone with a nearby device (on the theory that it's hard to track down the requestor in real time; I'm a little skeptical). The Freenet version is to only exchange data with your friends, but that data is routed. Hopefully there will still be some fixed links, whether the open internet, covert wifi/optical links, or whatever; these can run off your home router, which also automatically rendezvouses. Two kinds of data are important here: First is requests for specific files, which are routed, and can take some time to return, but function similarly to Freenet requests - only they take longer, but take advantage of high bandwidth when available (the bandwidth of a 16GB USB stick exchanged daily is roughly 1.5Mbps per peer in each direction...). The second is publish/subscribe: Message boards, blog updates, broadcasts of all kinds, can be routed efficiently. Arguably this wouldn't be Freenet because you'd want big block sizes and so on; IMHO Freenet is not defined by its block size, and I can see ways to avoid many of the problems related to that (e.g. by including the metadata in the indexes where you found the file). Anyway, a future, friend-to-friend, routed sneakernet network would likely use a fair bit of Freenet technology and ideas, and it could run over phones with fast short-range connectivity, or over swapping USB sticks. How likely is this? IMHO UWB is likely - they will need it for Wireless USB, and a lot of location-aware stuff should benefit from it. Also, enough RAM and horsepower is likely to be reasonably cheap in the not too distant future, again because of Moore's law. And unrestricted? That's the big one. The only guarantee that a phone remains unrestricted is root access. The N900 and some Android phones are officially "rootable"; Symbian includes an RTOS built for phones without a dedicated comms processor, so Symbian phones are never rootable. IMHO long-term it will be cheaper to have a dedicated comms processor than to use an RTOS to avoid having one. That leaves the question of market/consumer pressure versus monopolist pressure. We'll see, but it looks a lot brighter now than it did 5 years ago.
  4. And finally...

    I have recently been conversing with this blogger, whom I will call Ted. Particularly on this post. Initially Ted's argument was that science is useless to religious people (because any observations that don't make sense must be miracles) and that religion is useless to scientifically minded people (for all the usual anti-miraculous reasons). IMHO I have debunked the first point adequately: Just because occasionally God does miracles does not mean that the reason your experiment isn't doing what you think it should is because of miraculous intervention! The second point goes straight into apologetics: Is it possible to argue rationally from evidence to the divine? The main objection in principle as I see it is that miracles are so extraordinary that you can require any evidential threshold you like, taking the view that any implausible unsubstantiated absurdity is more likely than the proposed miracle. But extraordinary things do happen - many have happened in the last 10 years, the last 100 years. How do we deal with this? Of course this is without actually investigating any alleged evidence. I know far less about historical apologetics than I should, although I have some idea about scientific apologetics - universal parameter tweaking versus inaccessible multiverse etc, but I have recently started to have a look at a rather thorough book on the topic (The Resurrection of the Son of God by NT Wright)... Have a look anyway! Most of the debate centers on interpretation of the consistency principle.

    2010/05/17

    Of professional ethics

    It has recently become clear that joining the ACM as a professional member has some interesting benefits and is relatively cheap - especially as I'm not a graduate and may not be for some time. However, their Code of Ethics implies respect for and adherence to software patents:

    "Violation of copyrights, patents, trade secrets and the terms of license agreements is prohibited by law in most circumstances. Even when software is not so protected, such violations are contrary to professional behavior. Copies of software should be made only with proper authorization. Unauthorized duplication of materials must not be condoned."

    I despise copyright infringement, in virtually all cases (there are exceptional cases where there is an overwhelming public interest, see e.g. scientology). Casual copyright infringement undermines attempts to build a sustainable future - free software, free culture, truly free music (e.g. jamendo, magnatune is better than half way), and so on. However, software patents are entirely another matter. As the FFII has extensively documented (some of the more ludicrous examples), even those patents which are likely to be held as valid, being technical (which lately means "can be implemented with technical implements such as pen and paper", see the Hitachi decision), new and non-obvious (both of which are interpreted by the EPO as weasel words), are frequently either unavoidable or the inevitable solution to a small everyday problem which is discovered in parallel every time some programmer tries to solve that particular problem. It is not practical for me to read every patent, especially as they are written in legalese and many of them are so broad as to be invalid, or there is prior art; and only implementing algorithms I have created myself is no defence.

    Practical, shipping software will inevitably violate many patents. No small company which ships software (as opposed to a patent troll) can ever successfully sue a large company owning a lot of patents: they are forced to cross-license, and the big companies usually cross-license voluntarily to avoid mutually assured destruction. And open source software has no per-copy revenue from which to pay license fees, even if that were reasonably possible (which it isn't due to the sheer number of patents a real product will infringe on). Software patents must end.

    However, ACM has continued to send me nagging mails to try to get me to join even after I explained this to them. I am interpreting that as meaning that they still want me to join even though I disagree with a minor point of their code of ethics. Since it appears to be good value (I occasionally buy papers anyway) and important to professional development, I have joined, and emailed them again with my concerns post-joining and my membership number. I am also documenting this here, and linking it from my home page, to make it as public as possible.

    2010/05/13

    Of indexes, illnesses, and elections

    I've added Now What? to my flog links, as well as evan's Freenet stats page. His latest post (concerning aggregation) suggests we really need to merge kurmi's ATOM filter from last year's Summer of Code, and ideally write an RSS filter (the problem with RSS is there are many different formats and they are all rather chaotic; ATOM solves these problems but is less well known). With regards to Freetalk, xor has recently discovered a major bug (lots of stuff that should have been indexed wasn't), and new Freetalk builds should be significantly faster. These will depend on build 1248 due to internal API changes. Progress is being made, Freetalk will become semi-official (= listed on the plugins page drop-down) soon, but then we've been saying that for a long time ... At least plugins are shut down properly on node shutdown now; xor has been bugging me about that for ages. As for many simultaneous USK subscriptions, this is certainly somewhere we could improve things; the most immediate priority with USKs is date-based edition hints (see bug 150; important for Freetalk in the medium term), but we may want to e.g. make sure that if possible we always poll the next few slots sufficiently frequently that we should get notified by ULPRs on a new version; at the moment polling is based on exponential backoff (with limits), so we gradually try more and more future editions and poll the obvious next few less frequently - using a lot of request slots for little gain, if we are sure we are already fairly close to the current head; this is less than perfect for this sort of application. Also, exponential probing (as opposed to increasing the number of slots probed slowly) might help, but given that Freenet is unreliable, it's far from a simple binary search. See bugs 4139 and 4046. Oh, and yes, embedded Freetalk forums in flogs are planned. You can already embed a search engine, FlogHelper does this.
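
    As a rough illustration of the exponential probing idea (this is not the node's actual USK scheduler, just a sketch of "keep the next few slots hot, probe further ahead at exponentially spaced offsets"):

    import java.util.LinkedHashSet;
    import java.util.Set;

    // Rough illustration only: fred's real USK fetcher is considerably more
    // involved (ULPR subscriptions, per-subscriber state, backoff limits, etc).
    public class UskSlotProbe {
        // Editions to poll this cycle, given the latest known edition: always
        // the next few consecutive slots, then exponentially spaced probes
        // further ahead up to a limit.
        static Set<Long> editionsToProbe(long latestKnown, int consecutive, long maxLookahead) {
            Set<Long> probes = new LinkedHashSet<Long>();
            for (int i = 1; i <= consecutive; i++)
                probes.add(latestKnown + i);            // keep these "hot" for ULPRs
            for (long step = consecutive * 2L; step <= maxLookahead; step *= 2)
                probes.add(latestKnown + step);         // sparse exponential probes
            return probes;
        }

        public static void main(String[] args) {
            // e.g. latest known edition 23: poll 24..27, then 31, 39, 55, ...
            System.out.println(editionsToProbe(23, 4, 512));
        }
    }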

    With regards to default bookmark indexes, our policy officially is that we decide which indexes to link to on the basis of how useful they are to a user (probably a newbie) trying to find stuff on Freenet. IMHO good labelling and categorisation make an index much more useful to most newbie users, who are not familiar with the endless acronyms for "excrement". This was why we dropped Another Index - there was a child porn site under the Indexes category with no warning about its content, no text, absolutely nothing to indicate what it was, except the name, which newbies would not have been able to decode. We cannot refuse to link to an index just because it contains links to objectionable content - however, we can refuse to link to it if 1) it contains objectionable content (e.g. grossly offensive activelinks), 2) it is publicly writable (e.g. The Public Index) and therefore vulnerable to vandalism, or 3) it doesn't adequately label content and therefore is likely to scare away and put off new users.

    Evan's work on LDPC codes or cross-segment redundancy should radically improve the reliability of big files. This is well up the todo list, although it may involve significant source code and data structure changes, and evan doesn't seem to be making rapid progress on a good LDPC algorithm. Selective reinserts at a much lower and more precise level than suggested are of course possible and will eventually be implemented as many filesharing tools will want them; however, all predictable inserts are inherently very dangerous, so I am generally rather skeptical about insert-on-demand and reinsert-on-demand solving all our filesharing problems. IMHO there is room for radical improvements to Freenet's current rather poor data persistence - at the FEC redundancy level, but also at the routing/network/block storage level. Much work has been done on this and much more needs to be done; I will be implementing some tester code soon that should throw a little more light on it (specifically, whether bug 3639 has made much difference, and whether triple-inserted blocks still have vastly higher persistence rates). Of course my strategic point of view doesn't mean I have anything against higher level filesharing tools based on kludgy solutions and reinsert on demand, although IMHO they should make the risks clear to the user; if such tools are plugins they could well be official plugins, on the plugins page and maybe even bundled if they have a sensible architecture and good code. As always, a serious strategic issue is churn: If we have 100,000 new users but they have on average very low uptime, most data won't be readily available unless we have a lot of redundancy. However, current statistics aren't as bad as feared, and opennet bootstrapping is much faster than it used to be so new opennet nodes become useful much more quickly.

    Two other serious problems with big file filesharing at the moment: Firstly, if you have a big queue, Freenet really thrashes your disk etc, often slowing down your whole computer. We have done some optimisation work this year, and there is a lot more we could do; this is IMHO a reasonably high priority. Secondly, how exactly do you find content? We need a good, scalable, spam-proof search system. One of our Summer of Code students should be implementing Web-of-Trust based distributed searching. This will be based on the new index format - a big piece of infrastructure developed in last year's Summer of Code, which I am now deploying.

    My current focus is to get the new index format working with XMLSpider. The old index format is not scalable, not forkable, and not incrementally updatable. The new index format is all of these things and with some small changes should be really fast as well as fairly robust against losing keys. Without the planned changes it should still be a lot faster on the client side than the old format, and as I mentioned it will be inserted incrementally, eliminating the current week-long disk-thrashing index generation.

    In detail, if you load recent versions (not yet published jars pending ongoing debugging) of XMLSpider and Library, and configure a maximum buffer size for new format indexes, XMLSpider will periodically send a load of data to Library, which Library will merge into an on-Freenet new-format index. At the moment it's a bit slow, but it is likely that once the spider has got through your datastore, index updating won't be the bottleneck. I've also written about this on the mailing list. Note that while I am of course testing this, I cannot release an index myself, for legal and practical reasons; I suggest that index inserters, people like wAnnA who should take their anonymity very seriously, should start to look into the new code soon - at least once I've got enough bugs out to put out an official release. One other change - XMLSpider now supports a keyword blacklist as well as an extensions blacklist. It defaults to blocking no keywords, but you can specify keywords to block if you so desire. This matches keywords in Freenet URIs, not in the content of the pages, and thus is a very crude instrument; I have no intention to "improve" it, it's just a quick hack so I don't see my node pulling sites that are obviously Really Bad Stuff while testing. Note also that, legally speaking, by blocking anything you may make yourself liable for what you don't block; read the EFF's advice to peer-to-peer devs if interested.
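
    A crude URI-keyword check of that sort amounts to little more than the following hypothetical sketch (not XMLSpider's actual code or configuration):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Locale;
    import java.util.Set;

    // Hypothetical sketch of a URI-keyword blacklist; XMLSpider's real
    // implementation and configuration differ.
    public class UriKeywordBlacklist {
        private final Set<String> blockedKeywords;

        UriKeywordBlacklist(Set<String> blockedKeywords) {
            this.blockedKeywords = blockedKeywords;
        }

        // True if the Freenet URI contains any blocked keyword (case-insensitive).
        boolean shouldSkip(String freenetUri) {
            String lower = freenetUri.toLowerCase(Locale.ROOT);
            for (String keyword : blockedKeywords)
                if (lower.contains(keyword.toLowerCase(Locale.ROOT)))
                    return true;
            return false;
        }

        public static void main(String[] args) {
            UriKeywordBlacklist filter =
                    new UriKeywordBlacklist(new HashSet<String>(Arrays.asList("spider-trap")));
            System.out.println(filter.shouldSkip("USK@blah,blah,blah/spider-trap-site/1/")); // true
            System.out.println(filter.shouldSkip("USK@blah,blah,blah/somefreesite/4/"));     // false
        }
    }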

    One other issue with bittorrent is that many torrents don't have any seeds; if we can improve Freenet's data persistence enough we can outperform it for rare stuff IMHO. Of course realistically Freenet is going to be relatively slow for some time to come, even if we implement e.g. the proposed changes to load management to make it more bursty and adapt to local conditions better. On the upside, upstream bandwidth (at least in rich countries) is rising relatively rapidly - I finally went from 8/0.5 to 20/1.2 recently and it makes a big difference (note that Scarborough is a small tourist-oriented town and hub for nearby semi-rural and rural areas), and there is work going on nationwide to deploy FTTC with up to 40/10, which will likely eventually be upgraded to FTTH at 100/100 or more. How Freenet will work in such an environment is something I look forward to exploring; but it is important that it scales down to basic broadband for hostile environments.

    With regards to the legal and strategic issues, the principle of peer to peer filesharing is sound, and not inherently illegal or immoral. Personally I find a wide range of content objectionable: Illegally sharing copyrighted material (apart from exceptional cases of corporate, cultist etc abuse) is objectionable because it undermines attempts to build a better alternative; and I have my reasons for objecting to pornography in general (but of course child porn in particular). However, filesharing in general is not used only for these things, any more than Freenet is. And it's not so uncommon nowadays that politically sensitive materials can be quite large - email archives, suppressed video, wikileaks archives, and so on, can be largish. Thus it is important that Freenet work well for such things. One of our Summer of Code projects will implement a filter for Ogg Theora so that videos can be downloaded safely through Freenet, for example, and then there's the work above about making big files work better.

    Legally speaking, even the (previous) UK government admitted that peer to peer is a legitimate technology and area of innovation; in a white paper it explained that sacrificing the telecomms industry to save the publishing industry is pointless as they're about the same size. Of course, then it went and implemented a three strikes law, which will take effect in approximately one year. In France, Freenet is much larger than it was before Hadopi (another three strikes law), so it's quite possible that Freenet will gain significantly from such measures in the UK. Hopefully it will be quite some time before anyone tries to attack Freenet (or anonymous peer to peer in general) specifically, but if and when they do, darknet provides a reasonably robust solution, provided that 1) the density and distribution of would-be users is such that you can actually find people to peer with, and 2) the government doesn't implement politically or technically expensive options such as traffic flow analysis, blocking all peer to peer comms etc. Even in the latter case, e.g. in Cuba (no legal internet connectivity), Iran (willing to tolerate massive disruption to everyday users' lives, mostly dial-up rather than broadband), etc, sneakernet-based routed peer to peer (which may draw significant chunks of technology from Freenet) should have a future; IMHO Haggle is great, but that sort of public, opportunistic model is not going to work in many hostile environments.

    Near future priorities (for me and Freenet development more broadly):

    • New index format, freesite searching, etc - At the moment I am planning to do the minimum required to make new format indexes feasible and reasonably easy. The SoC student is working on distributed searching, so I could do more work, e.g. on the various Library bugs, or the proposed format changes. We'll see. Good fast search, with up to date indexes, is surely vital for making Freenet more usable and useful and retaining newbies.
    • Windows installer - There are various important changes in a branch, including automatic memory limits configuration (which is already implemented on Linux/FreeBSD, but continues to be a PITA on OS X). These will need testing; I might release a tester build. Beyond that, we really need to fix the infamous "service does not respond to signal" bug (which is sadly not specific either to Vista or 64-bit); the next step is for Zero3 to implement a version of the installer that doesn't create a service, and just runs Freenet on login as the user who installed it.
    • Freetalk, WoT and FlogHelper - All of these should be made semi-official soon. This is dependent on xor as always, but he seems to be making significant progress. I recently fixed a related bug in fred which he'd been nagging me about for ages: Plugins are now explicitly shut down when the node is shut down; this is essential to prevent data loss for plugins which use a database.
    • Decide what to do about Freemail - There has been some work on this recently, including a pull request I haven't attended to. If we want to maintain it as an official plugin, and distribute jars, I will have some back-reviewing to do, and there are some serious bugs that need fixing; it works very badly for me. I have heard a few theories as to why, so it might be fixable; and then we have to consider Freetalk integration.
    • Data persistence work - Mostly this is investigative at the moment, writing tests etc. Long term, bloom filter sharing, tweaks to caching policy, tweaks to the insert logic, and so on, may be important.
    • LDPC or cross-segment redundancy - This is IMHO very important in the medium term. It will be a significant amount of work, but radically improve data persistence for large files. A shorter-term tweak is splitting segments evenly.
    • Local performance of filesharing - Right now with a big download queue (let alone uploads), Freenet is very heavy on the system, particularly on disk I/O. We can dramatically improve on this. Will be quite a bit of work, but worth it in the medium term.
    • Better USK support - Several issues mentioned above, this could be important once Freetalk is more widely deployed, as well as being important for other areas.
    • General bugfixing and usability work - There are lots of small bugs, many of them simple usability tweaks. We should fix some of them.

    When will we release 0.8.0? I don't know. At the moment the financial situation isn't critical - FPI just sent me $6000 (international transfers are expensive so I get paid infrequently), and they have about $3400 left after the 2.5 months (modulo me feeling well) it will take me to use that up. IMHO improving Freenet is probably more important than declaring a specific version as stable and getting some press for it. I do think that we should, if possible, solve several of the above issues before any stabilisation-and-publicity-release period. But it's up to Ian. IMHO the only things that are absolute blockers are Freetalk and fixing the wininstaller, but several of the other things I've mentioned above would bring such an improvement in Freenet's usability that IMHO they take precedence over last minute bug fixing.

    Meanwhile, the UK, where I live, has a new government, a liberal-conservative coalition with an impressive list of agreed policies. It appears that much of the liberal agenda on fairness and climate will be implemented, and it preserves the best bits from the tory manifesto (two interesting tory environment policies which if done right could make a big difference: carbon floor price and bribes for local authorities allowing wind farms). Of course, the long overdue referendum on AV probably isn't winnable, and the big question will be where the axe falls - Tory instincts are naturally to tax the majority and squeeze the poor, but we'll start to get some idea what the priorities of the new government are soon. Spending matters, no matter how good the other policies. For example, only the liberals committed to climate adaptation funding being additional to aid - a commitment that will surely fall by the wayside now, despite their controlling Energy and Climate Change, but that is a probable deal-breaker globally as well as being a key issue of global fairness - if you believe there is any chance of a deal in the first place. One interesting question is whether the blatant upper middle class subsidy of higher rate pensions relief will survive - it's quite a lot of money and it's not the rich who need to be encouraged to save. It's not mentioned one way or another in the coalition document, but posting all the bad news at once is probably a bad idea. Capital gains tax reform is mentioned but the wording may allow for a massive discount for shares, so it may not mean CEOs paid in options paying their fair share after all (relatively small amount of money involved here either way). Anyway, VAT increases etc would be very unpopular (whereas higher rate taxpayers are unlikely to defect to Labour), and real terms spending cuts will be very difficult. I don't see why the coalition can't last for a reasonable time, given that neither party can easily force an election at a moment of weakness for the other - but it will be difficult to keep the back-benchers in line. Whereas if it had been a weak tory majority government, this would have been just as bad, except all the back-benchers would have been tories, and most of them (according to various polls) climate-denying pro-hunting anti-poor nasties. The liberal-tory pact may well be the best realistic outcome we could have hoped for; I voted Labour in the hopes of such a result.

    In other political news, there is an outside chance of the EU adopting the 30% cuts by 2020 target. This is essential - the 20% target combined with the recession and the "flexibility mechanisms" (offsetting, over-generous free industrial allocations subsidised by the harder-to-decarbonise electricity generators' customers, banking of unused permits) mean very little meaningful action will be taken if we don't move to the higher target, and the 30% target would in fact cost less now than the original forecasts for the 20% target. Leadership sometimes means acting even though others aren't, or in anticipation of others acting. It is high time we showed some - Europe are supposed to be the good guys relatively speaking, but that is rapidly being eroded by their own inaction. The events of the recent election show that sometimes, surprising things do happen; so again, we'll see.

    Health-wise, I've just had two courses of antibiotics and a chest X-ray, and have problems with both my respiratory and digestive systems; whether the current issues are related to the problems at Copenhagen is unclear, but for any brothers reading this, praying for me to be well and able to work at full speed would be helpful. I haven't been working at full speed for much of this year - not that I haven't done any work, but between illness, worry about illness, and other distractions, I've not worked as much as I should.

    Meanwhile my computer (named Servalan after a certain female sci-fi baddie) is periodically stalling for no apparent reason ... except that it's nearly 4GB into swap, partly because of testing indexing (but it was 2G into swap and still sticking occasionally even before that). 8GB ought to be enough for almost anything... apparently it isn't any more.

    2010/03/18

    Have a break, and opennet

    This video was taken down from YouTube due to a copyright complaint from Nestle. Recently I have eaten a few kitkats after they became fairtrade, but sadly no more, at least for now, as they are apparently buying palm oil from deforested areas. The video is available on Vimeo and now on Freenet.

    In other news, there has been a lot of work on opennet recently. This divides into three categories:

    • Major bug fixes, e.g. that the grace period wasn't working at all
    • Enforcing fairness between connection types, to avoid path folding crowding out announcements or vice versa
    • Increasing connection churn, for faster bootstrapping

    The intention is to dramatically speed up bootstrapping, and so far it seems to be working: we bootstrapped to 6 peers in 0:46 today. Faster reintegration will mean that data goes where it should and therefore should persist longer; it will mean happier users, and so on. On the other hand it might mean more casual users, in which case we will have to increase redundancy to maintain data persistence. We don't maintain very much state for connections, and connection setup isn't that heavy (3K or so), so the connection churn shouldn't be a problem. I have emailed the senior theorists to check, but when I've asked them before connection churn hasn't been a big issue.

    The next step is to investigate data retention further. Recently we discovered that blocks inserted 3 times have 90%+ retention after a week, versus 70%+ for blocks inserted once. This is unexpected as it is likely that the data has fallen out of the cache after a week, but not out of the stores, which should be the same for all 3 inserts. It may be that this bug (not caching for a few hops resulting in missing the places where the data should be stored) is part of the reason why. So having fixed the bug, we can now test with and without the fix, with single inserts and triple inserts (as it is configurable per-request), to see what the difference is. If there is still a big difference between single insert and triple insert, more work will be needed to figure out why and whether it can be reproduced without actually inserting stuff 3 times.

    Meanwhile it is just possible that the infamous Windows installer "Service did not respond to signal" bug has been fixed with the latest installers, by a simple fix relating to detecting where Java is installed. If you can reproduce this bug then please contact us - we really need help with it, primarily from people who are able to reproduce the bug at will and test fixes; contact me or Zero3. We may yet have to take the extreme solution of installing Freenet to run on login rather than on startup as a service, which would be a fair amount of work, would cost us a little time on login, and wouldn't work well for servers, but has a number of advantages.

    A new version of WoT is available, and Freetalk will soon follow, although in the near future both will be reset i.e. the context name will change so all old identities, boards and threads will become inaccessible - but the upside of this is WoT and hopefully Freetalk will be added (as betas) to the official plugins list and easily loadable from the plugins page. There is a bug related to leaking USK handlers that affects Freetalk quite badly, which I will investigate soon.

    Evan is still working on making large splitfiles more reliable. At the moment, we divide data into 32KB blocks, and then we group those blocks into segments of 128 (= 4MB of data). We encode each segment with a Vandermonde FEC code, creating 128 "check blocks", so that we can reconstruct the original data blocks (and all the check blocks too) with any 128 data or check blocks. The big problem is that big splitfiles may consist of a lot of such segments, and since there is no redundancy between them the odds are that one or more of these segments will fail. The less big problem is that Vandermonde codes, while being "perfect" in that they don't need any more blocks than the original count of data blocks, use a lot of CPU and memory and/or disk seeks, and that this gets a lot worse if we increase the segment size beyond 128 data + 128 check blocks.
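
    To get an intuition for why many segments are a problem, here is a back-of-the-envelope sketch (my own illustration, not anything in fred) that treats each block as surviving independently with some probability p: a segment decodes if at least 128 of its 256 blocks can still be fetched, and a file decodes only if every segment does.

        // Rough illustration (not fred code): probability that a splitfile is
        // retrievable, assuming each of the 256 blocks in a segment (128 data +
        // 128 check) survives independently with probability p. A segment decodes
        // with any 128 of its 256 blocks; the file needs every segment to decode.
        public class SegmentOdds {

            // P(at least k successes in n independent trials of probability p),
            // computed in log space so the binomial coefficients don't overflow.
            static double atLeast(int n, int k, double p) {
                double total = 0;
                for (int i = k; i <= n; i++)
                    total += Math.exp(logChoose(n, i) + i * Math.log(p) + (n - i) * Math.log(1 - p));
                return total;
            }

            static double logChoose(int n, int k) {
                double r = 0;
                for (int i = 1; i <= k; i++)
                    r += Math.log(n - k + i) - Math.log(i);
                return r;
            }

            public static void main(String[] args) {
                for (double p : new double[] { 0.50, 0.55, 0.60 }) {
                    double segment = atLeast(256, 128, p);
                    // ~250 segments is roughly a 1GB file
                    System.out.printf("p=%.2f  segment=%.4f  250 segments=%.6f%n",
                            p, segment, Math.pow(segment, 250));
                }
            }
        }

    With independent failures and a 55% per-block success rate, a single segment still decodes about 95% of the time, but a file of 250 such segments almost never does; real failures are correlated, so treat the exact numbers as illustration only.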

    The first solution Evan investigated was two level segments: On top of the ordinary "outer" segments we would add a layer of "inner" segments, which involve randomly chosen blocks from across the outer segments. This would dramatically improve retrievability for a given block success rate. This still uses Vandermonde codes, and has its own eventual limitations, however it would not be difficult to implement. The second solution is LDPC codes. These would, for larger files, give an even better success rate, although they need a few extra blocks so for small files it may be better to continue to use regular segments. Plus the CPU usage is very low, and blocks are decoded in small groups of no more than 5 (so no heavy memory usage or seeking), which can happen partially progressively, although most of the decoding will still have to happen near the end. Evan is still searching for a good algorithm for selecting the blocks so he can simulate this and compare it to the other schemes. If LDPC codes are the big improvement that it looks like they will be, they will very likely be implemented before 0.8, because either scheme could make a huge difference to the retrievability of big files.

    Sashee has been trying to get the web-pushing branch merged for a while, and hopefully it will go in soon (disabled by a config option by default, as it is still quite buggy). This will make it easier to maintain and allow sashee to reintroduce the stuff he took out to make merging easier. There has been some interest in this recently from benaroia on IRC, hopefully he will help us to further improve it. The web-pushing branch essentially uses AJAX to update various elements of the web interface in-place, rather than having to refresh or reload pages: The progress bar when loading a page, the notices at the top of the various generated pages, the content of various generated pages (e.g. stats, connections, etc), and most impressive (when it works), loading lots of inline images on a freesite.

    Oh and we got slashdotted on Saturday. A PC Pro article very similar to the Guardian article last year. This seems to have got us a few more users.

    2010/02/04

    Status update (February 2010)

    Time for a status update...

    BUILD 1240

    Our last stable build, 1239, was in November. We have just released a new one, 1240. This has many changes (opennet stuff, optimisations, all sorts of stuff), which I list in the mail about it. One of the most important is that there are several new seednodes, and many dead ones have been removed. I have tested it 3 times today and it's bootstrapped fast each time, although yesterday it bootstrapped very slowly one time.

    NETWORK STATUS AND NETWORK STATISTICS

    Evan Daniel has been doing some useful work analysing the network. Amongst other things, he has discovered that:

    • The Guardian article, in December, which was reprinted around the world, has more than doubled the size of our network, although there is a slight downward trend now. This may be due to seednodes issues and not having had a build since November.
    • We have around 4500-7000 nodes online at any given time.
    • Over 5 days, we have around 14000 non-transient nodes.
    • For nodes online at any one time, roughly 37% are 24x7 nodes (96% uptime average), 33% are regular users (56% average uptime), and 30% are occasional or newbie nodes (16% average uptime).

    EMU IS DEAD, LONG LIVE OSPREY

    We have finally gotten rid of emu! Our faithful and powerful dedicated server supplied at a discount by Bytemark is no more. We now have a virtual machine called Osprey, which does most of the same job, for a much lower cost, and has a much simplified setup so should be easier to maintain. We have tried to outsource services, for example we use Google Code for our downloads, but some things will have to stay under our direct control for some time to come e.g. mailing lists and the bug tracker.

    You may have some difficulty with the update scripts, if you use update.sh / update.cmd. If it doesn't work, try updating the script manually from https://checksums.freenetproject.org/latest/update.cmd (or update.sh)

    WOT, FREETALK, RELATED THINGS AND OTHER PLUGINS

    Xor (also known as p0s) continues to work on the Web of Trust and Freetalk plugins. These are approaching the point where we can make them loadable from the plugins page, and then bundle them, enabled by default.

    WoT is the backend system which implements a pseudonymous web of trust, which functions in a similar way to that in FMS. You can create identities, assign trust to other identities, announce your identity via CAPTCHAs and so on. This is the Community menu, from which you can see your identities and other people's, and the trust relationships between them. WoT is used by Freetalk, FlogHelper, and probably soon by distributed searching, real time chat and other things.
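
    For readers who haven't used FMS, the general idea is that each identity publishes trust values for other identities, and your node scores an unknown identity by combining the opinions of identities you trust, weighted by how much you trust them; identities that score badly simply aren't downloaded, which is what makes the system spam-resistant without any central authority. The following is a toy illustration of that concept only, not WoT's actual trust calculation:

        import java.util.*;

        // Toy web-of-trust scoring (illustration only - NOT the actual WoT algorithm).
        // Each identity assigns trust values in -100..100 to other identities; your
        // score for a target is the average of your trusted raters' opinions of it,
        // weighted by how much you trust each rater.
        public class ToyTrust {

            static Integer score(String me, String target, Map<String, Map<String, Integer>> trust) {
                double weighted = 0, weights = 0;
                for (Map.Entry<String, Integer> e : trust.getOrDefault(me, Map.of()).entrySet()) {
                    int myTrustInRater = e.getValue();
                    if (myTrustInRater <= 0) continue; // ignore raters we distrust
                    Integer opinion = trust.getOrDefault(e.getKey(), Map.of()).get(target);
                    if (opinion == null) continue;     // rater has no opinion on the target
                    weighted += myTrustInRater * opinion;
                    weights += myTrustInRater;
                }
                return weights == 0 ? null : (int) Math.round(weighted / weights);
            }

            public static void main(String[] args) {
                Map<String, Map<String, Integer>> trust = Map.of(
                        "me", Map.of("alice", 80, "bob", 20),
                        "alice", Map.of("spammer", -100, "carol", 50),
                        "bob", Map.of("spammer", 40));
                System.out.println(score("me", "spammer", trust)); // -72: messages not fetched
                System.out.println(score("me", "carol", trust));   // 50: messages fetched
            }
        }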

    Freetalk is a spam-resistant chat system based on WoT. This is similar to FMS, but it will eventually be bundled with Freenet, and will be a part of it by default. You will be able to embed a Freetalk board on your freesite. FlogHelper is a WoT-based plugin for writing a flog (freenet blog), which is very easy to use, but uses WoT to manage identities. I would have bundled FlogHelper months ago, but WoT isn't ready yet and FlogHelper needs it.

    WoT should be ready soon. Recently a major issue was discovered with the trust calculation algorithm; once that and some minor issues are fixed, WoT will become a semi-official plugin. Sadly this will require flushing the existing "testing" web of trust, so all old messages and identities will go away. Freetalk needs more work; about 50% of the bugs marked for 0.1 on the roadmap are fixed at the moment.

    In build 1240, we pull in a new version of Library. This is a great improvement over the old version, it is faster, it supports embedding a search on a freesite, and has many bugs fixed. However searching for common terms can still cause out of memory crashes.

    There is another issue with Library: infinity0 spent last summer creating a scalable index format for Library, which should make it a lot easier to insert and maintain big indexes. We will soon change the spider to use this new format, and in the process we expect to greatly improve performance for writing indexes, so it doesn't take a week any more and is done incrementally. I realise this has been promised before, but it is important, so it will happen sooner or later, hopefully sooner.

    Full Web of Trust-based distributed searching, with a focus on filesharing, is on the distant horizon at the moment. infinity0 might be able to do some work on it as part of his studies, we'll see. It won't be in 0.8.0.

    PRIORITIES AND RELEASES

    We would like to get 0.8 out soon, or at least a beta of 0.8. Several major issues:

    • The windows installer needs to be fixed on 64-bit. This is being worked on.
    • Freetalk must be ready.
    • Auto-configuration of memory limits in the installers, and asking the user about memory usage (at least in some cases) is relatively easy and important, but not vital.
    • Substantial improvements to opennet, particularly making nodes announce onto the network and get where they should be as quickly as possible.
    • Substantial improvements to data persistence. We have done much here already but there is more to do.
    • Library must work well and fast out of the box. This means amongst other things the new spider mentioned above.
    • MANY BUG FIXES! The first beta does not need to be perfect, but there are some critical issues that need dealing with, such as the fact that nodes often don't resume properly after being suspended for a while.

    Please test Freenet, and report any bugs and usability issues you find on the bug tracker or via the Freetalk board en.freenet (note that this board will be wiped when Freetalk is reset, so after a new Freetalk release you may need to resend reports).

    OPENNET IMPROVEMENTS

    We have many ideas on how to improve opennet bootstrapping (make nodes assimilate into the network more quickly), and to improve opennet generally. Some of these are implemented in 1240, including many bugfixes. More will be put out over time so we can see their impact. Improving opennet should improve performance for the majority of users who don't run 24x7 and it should improve performance for everyone else too, as those nodes will get connected and start doing useful work more quickly.

    DATA PERSISTENCE

    We have many ideas on how to improve data persistence. There is a lot of capacity on the network, yet data seems to become inaccessible quite quickly (stats below). I am convinced that improving data persistence will improve Freenet's usability and perceived performance immensely. The continued popularity of insert on demand on uservoice demonstrates this as much as anything: People want a system that works! IMHO we can greatly improve things without resorting to insert on demand, although filesharing clients based on distributed searching may eventually offer it (but there are serious security issues with insert on demand).

    Evan is convinced that poor data persistence is mostly not due to data falling out of stores, but to the small number of nodes that stored the data (as opposed to caching it) going offline or becoming unreachable. We have increased the number of nodes that store data, we have made the node use the store for caching if there is free space, we have done various things aimed at improving data persistence, and there is much more we can do. An immediate question is whether the security improvements gained last year by not caching at high HTL have broken many inserts by making them not get cached on the right nodes; we will test this in 1241. A related question is why inserting the same key 3 times gives such a huge performance gain relative to inserting it once; we will investigate this soon after. We will probably triple-insert the top blocks of splitfiles soonish, but the bigger prize is to achieve the 90%+ success after a week that we see with triple-insertion of a single block, and this may well be possible with some changes to how inserts work...

    Finally, the redundancy in the client layer could be a lot smarter: We divide files up into groups of 128 blocks, called segments, and then add another 128 "check blocks" for redundancy. Unfortunately this means that sometimes the last segment only has 1 block and 1 check block, and so is much less reliable than the rest of the splitfile. We will fix this.
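
    As a sketch of what fixing this ("splitting segments evenly", mentioned above) could look like - this is illustrative, not the actual fred change - the idea is to keep the same number of segments but spread the data blocks across them as evenly as possible, so there is never a near-empty final segment:

        import java.util.Arrays;

        // Illustrative sketch of even segment splitting (not the actual fred code).
        // Old scheme: 129 data blocks -> segments of [128, 1]; the 1-block segment
        // gets only 1 check block and is far more fragile than the rest.
        // Even scheme: 129 data blocks -> segments of [65, 64].
        public class EvenSegments {

            static int[] segmentSizes(int dataBlocks, int maxPerSegment) {
                int segments = (dataBlocks + maxPerSegment - 1) / maxPerSegment; // ceiling division
                int base = dataBlocks / segments;
                int extra = dataBlocks % segments; // the first 'extra' segments get one more block
                int[] sizes = new int[segments];
                for (int i = 0; i < segments; i++)
                    sizes[i] = base + (i < extra ? 1 : 0);
                return sizes;
            }

            public static void main(String[] args) {
                System.out.println(Arrays.toString(segmentSizes(129, 128))); // [65, 64]
                System.out.println(Arrays.toString(segmentSizes(300, 128))); // [100, 100, 100]
            }
        }

    Each segment would then get the same proportion of check blocks, so every part of the splitfile is roughly equally reliable.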

    We have been collecting statistics on data retrievability over time. The below are "worst case" in that they relate to single CHK blocks, with no retries. Real life, with many retries (at least 2 for a direct fetch and more if the file is queued), and with large, redundant splitfiles, should be substantially better than these numbers. Every day we insert 32 blocks and fetch a bunch of 32 blocks from 1 day ago, 3 days ago, 7 days ago, etc. There are two of these running to get more data, so I am just showing both results here. The percentages are the proportion of the original insert that is still retrievable:

    1 day:   76% / 77%
    3 days:  66% / 70%
    7 days:  60% / 61%
    15 days: 48% / 48%
    31 days: 36% / 33%
    63 days: 21% / 19%

    Now, here's an interesting one. In each case we insert a 64KB CHK splitfile - that is, one block at the top and four underneath it. We insert one three times, and we insert three different ones once each. We then pull them after a week. We can therefore compare success rates for a single block inserted once, a single block inserted 3 times, and a simulated MHK, that is, a block which has been re-encoded into 3 blocks so that we fetch all of them and if any of them succeeds we can regenerate the others.

    Total attempts where insert succeeded and fetch executed: 63

    Single keys succeeded: 61

    MHKs succeeded: 58

    Single key individual fetches: 189

    Single key individual fetches succeeded: 141

    Success rate for individual keys (from MHK inserts): 0.746031746031746

    Success rate for the single key triple inserted: 0.9682539682539683

    Success rate for the MHK (success = any of the 3 different keys worked): 0.9206349206349206
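
    Just to show where those percentages come from, and what they would look like if the three fetches were independent, here is the arithmetic on the raw counts above:

        // Reproducing the success rates from the raw counts above.
        public class MhkStats {
            public static void main(String[] args) {
                int attempts = 63;                // inserts that succeeded and were fetched
                int tripleInsertSucceeded = 61;   // the single key that was inserted 3 times
                int mhkSucceeded = 58;            // any of the 3 different keys fetched OK
                int individualFetches = 189;      // 3 different keys x 63 attempts
                int individualSucceeded = 141;

                double p = (double) individualSucceeded / individualFetches;
                System.out.println("Individual keys:        " + p);                                         // 0.746...
                System.out.println("Triple-inserted key:    " + (double) tripleInsertSucceeded / attempts); // 0.968...
                System.out.println("Simulated MHK (3 keys): " + (double) mhkSucceeded / attempts);          // 0.920...
                // If the 3 fetches were independent, an MHK would succeed with 1-(1-p)^3:
                System.out.println("Independence estimate:  " + (1 - Math.pow(1 - p, 3)));                  // ~0.984
            }
        }

    Note that the independence estimate (~0.98) is a little higher than the measured MHK rate (0.92), which hints that fetch failures of related blocks are somewhat correlated; either way, both re-encoding into 3 keys and simple triple-insertion clearly beat a single insert.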

    USER INTERFACE AND USABILITY

    Ian's friend pupok is working on a new AJAXy user interface mockup for Freenet. sashee's web-pushing branch, which makes the user interface a lot more dynamic without making it look much different, should be merged soon, but turned off by default, since it has some nasty bugs. When it is turned on, it solves the age-old parallel connections bug, showing individual progress for each image without hogging your browser's limited number of connections (6 or 8 on modern browsers). Both of these may miss 0.8.

    More broadly on usability, usability testing is always welcome: Persuade a friend to install Freenet, watch them do it, don't help them unless they get really stuck, report any problems they have or any comments they make about how it could be better.

    2009/12/24

    Copenhagen and Christmas

    Merry Christmas! I have taken a good deal of time off lately because of being in London and Copenhagen, and because of getting a nasty stomach bug while there. I am taking some more time off for Christmas; more on Copenhagen below, the rest of this paragraph is work stuff. It will thus be a while before there is a new build out, but when there is hopefully it will include the WoT plugin at least as an easily loadable option on the plugins page, and maybe Freetalk as well. It will also include the first stage of some important opennet improvements. Evanbd has been doing some impressive work which for the first time means we are not completely in the dark: We know the network has around 15k users over a week, excluding all those who try it once and then leave; we know that the latter group is relatively small, as is the group of 24x7 users, and we know that due to the Guardian article and all the echoes of it across the world, the network size has more than doubled. Online at any given time are around 5-7k users. We seem to be losing some users now, but it is not yet time to panic, or to decide that we gained nothing from the Guardian article; it's possible some of the decline is a Christmas effect (holidays, work computers switched off etc). Anyway, have a look at evanbd's flog or his graphs.

    Now I will post what I wrote up while waiting for the train home, about my misadventures in Copenhagen, with lots of related rants. It's rather long for a blog post but if you want to read it then read it. I will post pictures later. Since I got back I have read most everything that has been written about the outcome of COP15; basically, it wasn't fair ($100B is a lot, possibly not enough, but will it be predictable, will it all be offsets, will it be redistributed from aid budgets?), wasn't ambitious (targets are way behind where they should be), and wasn't binding (well duh). Others will go into much more detail on this.

    First, New Life Copenhagen, who found people to stay with for 5000 activists in Copenhagen, including me (see below), wants some links. Highlights (I didn't take part in any of these events):

    Hunger Strike

    Pledge to not drink Coca-Cola until they clean up their "bucket full of despair" (details within!)

    Ecological Burial Contract (promession)

    New Life Copenhagen trailer

    New Life Copenhagen long trailer

    Friday December 4th:
    Final organising for coach going to The Wave. Panic when discover we have no banners at approx 10:30PM. Not resolved as have to get up early, so we probably won't have any really good press shots.

    Saturday December 5th:
    The Wave! Climate Emergency feeder demo in the morning, me and Scarborough greens there. Main event, huge demonstration in London, organisers say 50-80k people. Including me, 4 Scarborough Green Party folks, and another 4 people on the York coach. Unfortunately we never managed to meet up, but Dilys' relative got a bunch of photos anyway. March ends at Parliament, some Climate Camp folk dig in there with tents. Meet many interesting people. Interesting that the communists are now fully engaged with the issue of climate change. I detest communism, but the whole climate emergency thing may play into their hands - a command economy may be exactly what we need to rid ourselves of fossil fuels as soon as possible and then probably get into some hardcore geoengineering, especially if we leave it too late. It's an extra several hundred folks anyway! Remarkably wide variety of organisations and people.

    Eurostar to Brussels (2 hours). They claim 7.6kg of CO2, against 100kg or so flying the same route (IIRC). There are various disputes over this; the main inputs are concrete (tunnels, stations etc, but see here), and electricity (motive power, lighting and heating stations etc; I guess heating might be gas; Eurostar may not be affected so much by heating but Cologne station has a lot of heating). Eurostar scores particularly well as most of the route is in France or Belgium and therefore nuclear powered, and of course the trains are electric (half the emissions of diesel or better). Sleep in a private room in a hostel in Brussels ("Sleep Well").

    Sunday December 6th:
    TGV to Cologne, takes 2 hours. Apparently Thalys only started using TGVs recently, it will increase their carbon emissions slightly but enable them to poach from airlines, so a big net benefit, especially if Germany continues to decarbonise its electricity supply (they have a lot of domestic solar thanks to large subsidies that our government has recently copied).

    Cologne Cathedral from the outside

    Go up the Dome (cathedral). Unfortunately they don't let you store baggage at the bottom, so this takes quite a while with my enormous rucksack! The view from the top is worth it though.

    When I get back down, I discover a tiny climate demo/rally. Get talking to people and go with them to some sort of spanish center which has been partially co-opted for climate folk. Meet some freegans, who are in my humble opinion completely nuts. Freegan means you never pay for food: you get food from shops where it would have gone to waste, legally or otherwise; a lot of packaged, safe, slightly out of date food gets dumped. Apparently they are strictly vegan, and almost entirely freegan: they buy beer and cigarettes and very occasionally food. Apparently they squat in a forest and have one job between six of them. They are part of the Climate Caravan, coaches/vans travelling across Europe to Copenhagen. Right now coaches are actually cleaner than trains in most cases, although once our power grid and cement production are made "green", trains might beat them.

    Not being a big fan of long road journeys, perhaps hypocritically, I get the train. This is a big sleeper which takes 12 hours to get to Copenhagen, overnight. Other parts of it split off and go to places further afield. Lots of electric trains in the station, we are lagging behind on this, although the sleeper itself appears to be diesel.

    Monday December 7th:
    Arrive in Copenhagen having had a little sleep in the provided 4-bed compartment, and having briefly talked with the three other climate activists. One of them is a Greenpeace representative accredited to the Bella Center, which means he gets to sit through governments arguing over punctuation and generally saying a lot of words without making much progress. He also happens to be fasting in solidarity with the Climate Justice Fast hunger strikers, who haven't eaten since November the 6th. He's just fasting over the period of the conference and hopes that they will stop after the conference regardless of the outcome, rather than making their point in martyrdom after there is no good deal; the press conference from the beginning suggests that they may stop, but we'll see. I had hoped to join them for one day, but as you will see that wasn't possible. I did however discover a good deal about vomiting and diarrheal diseases, which apparently will increase with climate change (according to a report on third world impacts I've been reading)... UPDATE: the long-term CJF fasters Anna, Sara and Paul ended at 44 days, despite being disappointed with the outcome of COP15 (along with the rest of the world!).

    When I arrive at the metro station nearest my host, I discover a bunch of wicker people and tree stumps. The poster says this is a place for praying for an end to deforestation - kneel on the tree stumps or between the wicker people, or come back at 6PM every day for a larger group. Personally I haven't prayed anywhere near enough over the period of COP15...

    The Tree Hugger Project in Copenhagen

    Met my host, Soren, and his flatmate, Nils. I'm sleeping on their couch. New Life Copenhagen have made an art project out of finding people to stay with for 5000 activists.

    Copenhagen is an interesting city. Flocks of bicycles, with their own traffic lights, everywhere. Wind turbines (and a presumably CHP coal plant) clearly visible from the center, along with the big metropolitan buildings. And as with all capital cities, a good public transport system. District heating (based on waste heat from biomass electricity generation) and soon district cooling as well.

    But one of the most curious parts is Christiania: An autonomous suburb featuring its own quasi-anarchist self-government, open drug dealing (hence no pictures), many dubious buildings, and a 24 hour bakery. Buy some meat pasty thing from the bakery, and then discover the Climate Bottom Forum, an alternative event in a bunch of tents with a giant woman built of trash (PHOTO), focusing on transition towns, eco-villages, spiritual dimensions, indigenous knowledge, the end of capitalism and all sorts of other less than mainstream aspects to the crisis. Start throwing up before I leave.

    Tuesday December 8th:
    At my host's home, not feeling well, but finished throwing up. Talk to a doctor.

    Wednesday December 9th:
    Still not great, but eating a little. Nils gets the bug too, so I don't think it was food poisoning - it was most likely a bug I picked up at The Wave.

    Thursday December 10th:
    Somewhat better, but the standard advice is apparently to wait 3 days after the end of symptoms before going back to work, presumably to avoid spreading it. So I wait. Me and Nils walk down the river by Christiania gawking at the unconventional housing.

    Friday December 11th:
    Go in to Copenhagen, to the Klimaforum09. This is the main alternative summit. The Danish government has paid for the venue, which features a swimming pool, but the talks don't reflect this, with discussions on organic agriculture, alternative economics, geo-engineering, failures of carbon trading and offsetting, more on transition towns and eco-villages, and all sorts of radical stuff. Here it is easy to believe that everything is connected to climate change and climate change is connected to everything. Age of Stupid makes this pretty clear, hopefully you've seen it by now (we've shown it to 150+ people and it was broadcast on Monday). War, capitalism, the pursuit of endless economic growth, consumerism, deforestation, exploitation of poor countries, pollution, cheap flights; our whole way of life is tied up with cheap fossil fuel energy, and the endless quest to protect it seems to be behind many of our biggest wars. The Klimaforum declaration is in this spirit, looking to reject false solutions and transform the society and economy that has been responsible for climate change.

    False solutions include carbon trading, which has some very serious problems and at best is an unproven attempt to minimise the costs to western companies, offsetting, nuclear energy (the Green Grid proposals make it unnecessary, since the wind always blows somewhere in europe), and geoengineering (a distraction, they say, from radical cuts in carbon right now). A core part of the radical argument is that endless economic growth is unsustainable: The planet has finite resources, and buying more and more things, with their built-in obsolescence, is unsustainable, unless we have perfect recycling; and historically, major reductions in carbon output in parallel with economic growth have happened very rarely if at all. In any case I decide not to sign the declaration, mostly because I believe we will probably need geoengineering and nuclear, two major technologies regarded as "false solutions" by klimaforum. In theory we shouldn't need nuclear, as it is very rare that the wind doesn't blow anywhere in europe - link up the electricity grids, build a vast amount of wind, throw in some saharan solar and the norwegian hydro and a tiny amount of biomass and you have a completely renewable and reasonably cheap Green Grid. However this requires that everyone builds wind at the same rate, and there are serious worries about biomass as well as not having the ships we need for deploying offshore wind (of course onshore wind is much cheaper, but we need both), so IMHO we will probably need some nuclear - although it's not a long-term solution as we will eventually run out of uranium.

    Geoengineering is deliberately influencing the climate by either sucking carbon out of the atmosphere (e.g. via artificial trees), or by reducing the amount of sunlight that reaches the earth (e.g. via dumping chemicals in the upper atmosphere, similar to a "nuclear winter"). It's a last resort technology, but the latest science and the gross lack of progress in international negotiations mean we will probably need it. Unfortunately it will be very expensive and very risky, according to a recent Royal Society report. But we need to research it now as we will probably need it to keep a habitable planet, with scientists talking about the real possibility of 6 degrees by 2100 and 4 degrees by 2060 if we don't take sufficient action now (which we appear not to be doing, sadly). Engineering an artificial nuclear winter is not an acceptable alternative to cutting emissions now, but we are probably going to need both. And of course, as it's expensive, it will be deployed at the last minute, and not in time to save the poor countries; and it will affect different countries differently, with possibly severe side-effects...

    Attend a few talks at Klimaforum. One on why interest bearing banking is bad, contributes to short-termism and hence to climate chaos (e.g. deforestation), makes the poor poorer and the rich richer, and so on. The interest free banking folks apparently have a functional bank in Sweden which allows its members to borrow and save, charges a minimal fee, and seems fairly awesome. Next is a session on the Ghanaian perspective which seems to be cancelled. The interest free folks say we can pop in to the after-party but the address leads me to an S&M shop next to a gay bar with no immediate evidence of klimaforum people, so I go home to sleep!

    Saturday December 12th:
    This is it, the real reason I came to Copenhagen. Two marches, the first being Friends of the Earth's The Flood, which is similar to Saturday's The Wave, so blue dress code, with FoE handing out blue waterproofs; we marched from Klimaforum to the local government center, in the morning, in the cold (cold even with my 6 layers!). Then the real protest of the day: a mass rally with a lot of crazy music followed by a march to the Bella Center. This took several hours, with 100,000 people (according to the organisers, 25,000 or 30,000 according to the police) marching four miles and disrupting most of Copenhagen. There were 30 official groupings, surely hundreds of organisations, and while I started with the Friends of the Earth / System Change Not Climate Change group, I ended up spending most of the march between christians of various sorts and communists! When we arrive there is a further rally, with lots of speakers, and the communists try to get us to go back to the beginning so we can get arrested along with the 900 who have been so far. I decide it's a better idea to stay here and listen to the speeches, which are mostly in English. Eventually I go home amidst various placard bonfires. Arriving near my host's place I discover a minor roadblock, and go into a newsagent to find an old man complaining that kids have torched his car, while being very positive about "hippies" in general. I go around the block to get home...

    Didn't get any photos on the demo itself, but you can find plenty on the net, the following is of the place where the big demo started from but was on another day:

    The big demo started from here, but this is from another day

    Sunday December 13th:
    Some local evangelical lutheran churches are running a 24 hour open church, culminating in a "gospel service". This is approximately half in english and half in danish, with enthusiastic gospel singers leading worship, and the sermon (on creation yearning in birth pains for the revealing of righteousness i.e. the church, while affirming the need to steward creation) is in English. Plus they have a ridiculously cheap cafe (by Copenhagen standards). I talk a bit to a local and move on. There is another service in the afternoon but with all the VIPs attending it will be very hard to get a seat. So I wander.

    I misread the Klimaforum schedule and so think there is nothing on; there wasn't much on Saturday, but there's tons today, which I miss. I wander around the center of town, visit the Hopenhagen center in the middle of town, which according to the radical folk is a bastion of corporate greenwash (it's sponsored by Coca-Cola!). Certainly there are substantial corporate presences here, most notably Siemens, who have a surprisingly frank set of plastic cubes representing different aspects of your carbon footprint, lots of video on all the amazing climate friendly things they do, and a magazine with a lot more. Watched the video on the High Voltage Direct Current line between Australia and Tasmania, which enables both sides to avoid building more power stations. The same technology will enable europe to supply all its own energy with virtually no non-renewable output. Now, as a big engineering company, Siemens stands to make a lot of money out of decarbonising the economy (which is a good thing), but on the other hand it is involved in fossil fuel energy as well, as evidenced by their stuff about slightly better fossil fuel turbines. Ultimately it comes down to what they are saying at the Bella Center. Businesses are lobbying really heavily for their various agendas, often with huge delegations and side-events compared to the NGOs (groups such as Greenpeace or Oxfam campaigning for strong and fair action). I hope that Siemens is in favour of strong action - certainly from the material they showed it makes a lot of sense - but I don't personally know one way or the other.

    Another exhibit is for a low-carbon ship based on natural gas fuel cells. Another is for the local state-owned energy provider, which provides not only electricity but also piped heat (and from the same biomass power plants), making a massive saving in carbon (although there are serious worries about being able to grow enough sustainable biomass without displacing food production and thus causing deforestation; it probably wouldn't work in the UK, although in Denmark wood grown on marginal land is supposedly a big part of the 40% of their electricity they get from biomass). They are getting into district cooling from March, which also looks pretty awesome, with a 70% reduction in carbon relative to conventional air conditioning (as well as less noise and equipment, and cheaper). I continue to wander, around the museum and other places. There is a big display of low-carbon cars and vans at Hopenhagen. Some of them just burn biomass (with the problems I mentioned above) but most are practical electric cars, with good performance but usually limited range. We really need better batteries; air-breathing lithium ion batteries will have the same energy density (=range) as petrol, but the current prototypes ignite on contact with water vapour, so it's likely to be 10 years before that's sorted - and eventually we will run out of lithium. In the meantime, current electric smart-cars are very cheap per mile, although they have limited range and performance; the new electrics (from 2011) will have much better range and very good performance, ranging from plug-in hybrids to improved smartcars to expensive but very good stuff such as the Tesla Model S, which is fast and spacious, with range options from 100 to 300 miles. Infrastructure will be a problem, but hopefully the power companies will pay for most of the charging points, and eventually you'll be able to battery swap at a service station. Also have a brief look in one of the museums, and watch a few films by indigenous people affected by the changing weather.

    All in all, a day of rest. That's okay then, although I missed some very interesting talks.

    Monday December 14th:
    Pack. Go to a talk about the failures of carbon trading and carbon offsetting, and how this supposed "Clean Development Mechanism" actually encourages dirty development, harms indigenous peoples, and allows western companies to continue to pollute, while delivering questionable carbon savings. All in the name of minimising the costs of complying with the targets for rich country companies - and even without offsetting the targets are way weaker than they should be. Of course, so far, permits have mostly been given out (rather generously) to polluters rather than sold, resulting in massive and often unearned profits for some of the polluters.

    Go back home to collect my stuff and say bye to Nils. Get lost on the way back, taking the wrong bus, but get the sleeper train on time.

    Tuesday December 15th:
    Sleeper train is delayed by 90 minutes, and I have to change at a minor german station. This is actually not a problem as it would have arrived at 6:14AM and I had planned to spend the whole day in Cologne; and the extra train is fast. What is a problem is that after I have some food in Cologne, including some yellow substance that looks like mustard but isn't, I feel awful again for the whole day. I convince myself that I'm in no imminent danger of throwing up again after many visits to the 1 euro a time station toilet, and wander a little, but only a little. My huge rucksack is slightly too long for the automated storage system in the station. So I lurk in the station, reading many of the dozens of pieces of paper I've picked up at Klimaforum, and eventually Greg Egan's Quarantine, which is largely unrelated to climate change but is a great way to relax (crazy sci-fi, if you like Egan's other work you'll like it; cyberpunk taken to its logical conclusion, crazy science, lots of fun). Get on the train to Brussels, and sleep overnight in a hostel. Still iffy about eating.

    Wednesday December 16th:
    Misread my ticket, and miss the Eurostar home. The woman on the desk puts me on the next train, 3 hours later, at no extra cost, even though it's my fault and the ticket is theoretically non-exchangeable. This is awesome as a new ticket would have cost £150 or more. Don't rely on this, I believe I once had to buy a new ticket when this happened! Eat a little - a croissant and some fruit juice. Get into london around 4PM (the train is delayed), and discover I have to wait until 7PM before I can use my off-peak return ticket back home. Spend the time writing this lot up!

    All in all a bit of a battle, and I spent a lot less time following the negotiations, and listening and chatting about the solutions at Klimaforum, than I had expected. Ironically one of the reasons I went straight from The Wave to Copenhagen was to avoid bringing swine flu from The Wave back to family some of whom have health issues; it wasn't swine flu but I'm glad it's over! But the most important thing was the demonstration, and that went really well. 900 arrests out of 100,000 people - and most of those innocent bystanders. People from all over europe, and beyond, telling the politicians that the time for bickering over short-term national self-interest is over, that climate change is the biggest threat facing mankind, and that if the politics does not change then the politicians will have to change!

    Not that I trust David Cameron on the climate (who turns down most wind applications? tory councils, of course!), although today's announcement that the tories will finally implement green mortgages is very welcome; in theory all three parties are in favour of this, but Labour has dragged its feet for far too long. Green mortgages are a tool whereby you get insulation and then pay for it out of your reduced energy bills over a period. Most people would benefit from insulation, solar water heating etc, but most people don't have the capital, and don't want to borrow large amounts to cut future bills, with the loan following them if they move but the reductions in bills not doing so. Green mortgages change this, making it easy and risk-free. Hopefully in the long term this will apply to solar water heating, solar electricity panels, etc, as well as insulation, but insulation alone will make a huge difference to our consumption of gas, oil and electricity for heating - and most people don't qualify for the various free insulation schemes available.

    I'm not just mentioning this because it's in the news; at Copenhagen there was a lot of talk of Eco-villages and Transition Towns. Eco-villages are green and increasingly self-sufficient rural communities. An important part of this is electricity. 70% of wind turbine applications in the UK are turned down, mostly because it spoils the view and thus reduces house prices (although a recent american study casts doubt on the reducing house prices bit). One way to get support for a wind turbine is for the people who live locally to own it. You get together, start a small company, raise a small amount of funds, apply for planning permission and borrow a largish amount from the bank to buy a proper turbine (not the little ones, the big 2MW ones). For the next 25+ years you have green energy, some of which you use and some of which you sell, plus the subsidy for wind. Some of this money goes to repaying the loan, but a lot of it can be spent on improving the local environment - which of course the village gets together to decide on. Eco-villages also often grow their own biofuel for heating (e.g. wood on sloping land), and generally try to be self-sufficient as well as green. There were some impressive case studies shown - towns and villages where almost all their electricity and heating is produced locally.

    Transition Towns are the urban equivalent, and often don't do much in energy, as they usually don't have good sites for wind; Scarborough's two transition groups work on food and transport; contact Pete Redwood for more information and to get involved, although they are both undergoing changes at the moment. I have no idea whether we could have a wind turbine in Scarborough, but it might be worth thinking about. With the new feed-in tariff subsidies for solar (which are actually surprisingly generous, especially if you apply before April), it may even be worth thinking about solar, on individual, commercial or council buildings (solar is dependent on sunlight, and doesn't like to be too warm, so it's not out of the question here in the North; also you often don't need planning permission now) - but as always the question is the capital outlay; hopefully eventually small scale solar will be covered by green mortgages, making it easy and essentially risk free.

    Meanwhile, protesters have got into the Bella Center. I know no more about it than you do, but personally one reason I left Copenhagen on Monday was to avoid the big direct actions. I have a lot of sympathy for their position, but on the other hand sabotaging the conference on the day when the ministers arrive is probably not productive. Unless you believe that nothing good will come out of the conference, and want to make a point for the global media. We'll see; there won't be a legally binding agreement, there probably won't be an agreement on adaptation money (the vast sums needed by the third world to develop cleanly and adapt to the impacts of climate change that already kills 300,000 of them annually, e.g. by building sea defences), and the forest protection stuff is based on offsetting (thus locking in polluting western infrastructure) and seems to be compromised in some big ways. And even if we get an agreement in the new year, the rich country targets are way below what the science requires - 25-40% by 2020 for a 50% chance of avoiding 2 degrees or more, but would you get on a plane with a 50% chance of crashing? Anything over 2 degrees is likely to be enough for most of the positive feedback systems to kick in, leading to 3, then a terrible 4, and (without large scale geoengineering) up to an apocalyptic 6 degrees. More recent science, and a prudent desire for a larger safety margin, suggests at least 40%, with global emissions peaking as soon as possible - if emissions peak in 2020, as currently expected, we will need to cut carbon emissions by 5% globally after that every year. This sounds easy, but it is in fact very hard, especially when you consider that major fast-developing nations won't peak until the 2030s (after all, China has hundreds of millions of rural and urban poor as well as its middle class almost the size of the USA) - so it might be as much as 9% a year for the developed nations, and also require substantial geo-engineering. This will be very hard, and it's never happened before. It will dominate our economy for decades if we are serious about it, and given the risks the sooner the better I say! UPDATE: Here's an account of what happened on the Wednesday from the protesters' point of view.

    Campaign against Climate Change are calling for a climate emergency response. They have 45 MPs signed up to their EDM 2057 (Climate Change 2), which calls for a declaration of a climate emergency, 1 million green jobs (e.g. in insulation, improving public transport), a 55mph speed limit, banning domestic flights, a strong public information campaign (nobody complained about war coverage on TV in WW2, and that's exactly the situation we're in, an existential threat). They were behind a rally I went to just before The Wave, which made essentially these demands. Treating the problem as the serious issue that it is will create substantial numbers of jobs, increase bills in the short term, stop most of us from flying, show global leadership instead of the current ridiculous I-might-if-you-will games, and greatly reduce the amount of trouble that we are storing up for ourselves.

    What's trouble? Crop failures, droughts, erratic weather, hurricanes, floods, poor countries losing huge amounts of land (e.g. bangladesh stands to lose 18% of its land mass, many small island states stand to be submerged completely), serious problems with water supply, many horrible things, which will likely have knock-on effects such as wars over the remaining fertile land or water resources, and huge numbers of refugees who will as always be demonised as "economic migrants". Most of these things are already happening, in India or Africa, or even sometimes the rich world, hence the figure that climate change kills 300,000 people a year already. Since the sea is generally affected a lot less than the land, 4 degrees globally means 14 degrees in the arctic, 6 in Spain and much of Australia, 5-7 in most of Africa, 6 or 7 in much of the US, and accelerating loss of ice and permafrost (two notable feedback mechanisms). Exactly how bad it will be is not clear yet but it will be bad. The interactive map on the Met Office's website talks about 20-30% loss of agricultural yield in mid to low latitudes - meaning that poor countries will be hit hard. But this is based solely on temperature, not taking into account the increasingly regular droughts and floods. And some say the temperature effects on yields could be vastly worse - a recent study talking about 60-80% loss of US agricultural productivity in a 4 degree scenario. Hopefully that study is wrong; many are! Even if it isn't, solutions may be found (e.g. with genetic engineering; the current efforts to that end are largely corporates cashing in on offsets with crops that require less tilling but more chemicals, but future efforts may be more successful, this is another reason I didn't sign the Klimaforum declaration) - but erratic weather may limit what we can do cheaply.

    However, if we let it reach 6 degrees, we are looking at a situation not seen for 100 million years; last time it happened most life was concentrated around the poles, a truly apocalyptic scenario. As a placard said at the demonstration, "The planet's fine. We're f*cked.". Although there are a few scientists, including the very notable James Hansen, who argue for a "Venus syndrome" i.e. terminal climate change, the literal end of the world as a biosphere (Venus has runaway global warming, scorching temperatures, high pressures and sulphuric acid clouds). Hopefully we won't let it get that far - at some point the people and the politicians will realise what is at stake and will do what it takes to prevent it, even if that means a lot of collateral damage from geoengineering (it probably will), and vast expense next to which current complaints are irrelevant. But if we reduce emissions now, we can avoid most such problems, prevent most of the positive feedbacks ("tipping points") from kicking in, as well as moving to a more sustainable society.

    So now we wait. In the new year there will be more battles. Scarborough 10:10 is taking a well-earned rest after showing Age of Stupid 6 times to 150 people, but will be active in the new year. Contact me if you're a local and interested in helping. If you are interested in Sustainable Scarborough / Scarborough as a Transition Town, contact Pete Redwood. Note to Freenetters: Scarborough 10:10 is the local branch of the 10:10 campaign, a small group of people running showings of Age of Stupid, stalls etc to try to get people to sign the 10:10 pledge to cut their carbon emissions 10% in 2010. The objective is to use this to influence government policy for more ambitious action - and there has been some small success in this, see the website. I've taken over co-ordinating the local group since Jane stepped down, but we are resting until January 9th, and we will likely be making a much less intense effort for most of 2010; this was another demand on my time pre-Copenhagen.

    More holiday photos are here (on Freenet). I have stripped out anything containing anyone's visible face since I can't remove them from Freenet once uploaded. Sorry I didn't get much in Copenhagen, and nothing on the main demo, but there are plenty of places you can find such pics on the net.

    2009/10/27

    Build 1238

    1238 is now available. Normally I wouldn't blog on every build, but this one is worth talking about. The most important change is that this build has fixed a whole bunch of bugs in the client layer (persistent downloads and uploads), including stalling uploads and downloads, and it has done it in such a way that old, broken uploads should fail rather than just hang forever, old downloads should complete, and new uploads should complete. Some of this may have been a relatively recent bug according to the people I've been working with (notably p0s, although evanbd has helpfully nagged me about the problem); however, there are definitely some longer-term issues here too, so if you have stalling downloads please try 1238!

    Another notable change in 1238 is that the new CSS filter, kurmi's SoC project, has been merged. This is much more detailed and comprehensive than the old filter, and hopefully much more secure as a result: It parses everything and lets through only what it can understand, just like the HTML filter. This is the only way to be sure that a filter is secure. The old CSS filter only tokenised and didn't parse, so might (at least in principle) have let dangerous content through, and in order to prevent this it was more strict with strings etc than the new filter is. Also, the new filter supports > selectors, although not CSS3 selectors (which the old filter did, because it didn't parse them). It also has an extensive set of unit tests. Kurmi's mentor was nextgens, and I finished it off, resolved parsing issues, added unit tests, and merged it. So it is a significant improvement, something that we would have had to implement sooner or later, and it might just be a security improvement.
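
    To illustrate the principle (this is a toy, nothing like the real filter code): a whitelisting filter parses the input into a structure it fully understands, drops anything it cannot account for, and then re-serialises only what it kept, rather than passing tokens through and trying to spot the dangerous ones.

        import java.util.Set;

        // Toy "parse and re-emit only what you understand" filter (illustration only,
        // not Freenet's actual CSS filter). Unknown properties, and anything that could
        // trigger an external fetch such as url(...), are silently dropped.
        public class ToyCssFilter {

            private static final Set<String> ALLOWED =
                    Set.of("color", "background-color", "font-size", "text-align", "margin", "padding");

            static String filterDeclarations(String css) {
                StringBuilder out = new StringBuilder();
                for (String declaration : css.split(";")) {
                    String[] parts = declaration.split(":", 2);
                    if (parts.length != 2) continue;                    // not understood -> dropped
                    String property = parts[0].trim().toLowerCase();
                    String value = parts[1].trim();
                    if (!ALLOWED.contains(property)) continue;          // unknown property -> dropped
                    if (value.toLowerCase().contains("url(")) continue; // could leak a request -> dropped
                    out.append(property).append(": ").append(value).append("; ");
                }
                return out.toString().trim();
            }

            public static void main(String[] args) {
                System.out.println(filterDeclarations(
                        "color: red; background-image: url(http://evil/beacon.png); font-size: 12px"));
                // prints: color: red; font-size: 12px;
            }
        }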

    Another change is that the first-time wizard is now considerably more concise, while hopefully being clearer and having less FUD. If you can do usability testing, please do - that is, find somebody who has never installed Freenet, ask them to install it, and note down wherever they get stuck, what questions they ask etc. Don't help them, at least not until after they have got stuck. If they get stuck, that is a usability bug. Please send the results to the devl list, or to me via Freetalk on the en.freenet board, so that we can solve these bugs.

    Also, we have finally merged the new wininstaller, thanks to Zero3 and Juiceman, which does not create a custom user for Freenet, so should run into far fewer anti-virus/system policy/Vista compatibility issues, which were affecting very many users. Also, it includes a system tray icon from which you can conveniently start and stop Freenet if you need to for e.g. online gaming. So a bunch of important improvements, coming from various people. Worth a quick blog post IMHO.

    Next up, the new Mac installer, which will also include a system tray icon, thanks to mrsteveman1. p0s is continuing to work on Freetalk, currently working on a login system (you might have to wait a day or two for me to catch up with and deploy Freetalk as the current version is incompatible with 1238); hopefully it will be ready to be official before Christmas.

    Recent performance testing data shows that as long as you fetch immediately, Freenet can perform well, around 59 seconds to fetch a 1MB test key from a freshly bootstrapped node. Anecdotally very popular or very new keys can achieve very high speeds, maybe 30-80KB/sec. However, if you wait a few days, many of the keys will have fallen out, so it may fail and will probably be slower. Now, the long-term push/pull data (waiting e.g. 3 days between inserting and fetching) strongly suggests that some proportion of inserts are "getting lost" on the initial insert, pushed to the wrong part of the network due to backoff or some other reason. I am going to implement some test code to verify this theory; if it turns out to be correct, MHKs (duplicate the top block) and some tweaks to splitfile redundancy should be a fairly easy workaround, and improve persistence considerably.
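
    If that theory turns out to be correct, the idea behind MHKs is simply to insert the same top block under several independent keys, so a fetch can fall back from one copy to the next. A rough sketch of what the fetch side might look like - the fetcher interface and URI list are hypothetical stand-ins, not Freenet's real client API:

        import java.util.List;

        // Illustration only: the top block is inserted under several independent keys,
        // so a fetch can fall back if the "primary" copy was routed badly on insert.
        public class MhkFetchSketch {

            interface BlockFetcher {
                byte[] fetchBlock(String uri) throws Exception; // hypothetical
            }

            public static byte[] fetchTopBlock(BlockFetcher fetcher, List<String> alternativeUris) {
                for (String uri : alternativeUris) {
                    try {
                        return fetcher.fetchBlock(uri);   // first copy that succeeds wins
                    } catch (Exception e) {
                        // this copy is missing or unreachable; try the next one
                    }
                }
                return null; // all copies lost: the whole splitfile is unrecoverable
            }
        }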

    We are also planning some more work on network level diagnostics. Currently, we have probe requests. Probe requests are a kind of request that does not return data, is routed to a location rather than a key, and returns information about the nodes along the way - their locations, UIDs (which are only used for probe requests) and whether they are backed off. This is conceivably somewhat useful to an attacker, although swap requests (which are a vital part of routing) reveal most of this information without probe requests. Probe requests help us to understand what is going on on the network, whether routing is working, whether churn is a problem, and how big the network is. But we are planning to get rid of probe requests in favour of more specific, more useful mechanisms:

    • Random-routed requests returning the node's uptime and a unique ID for the node: This will enable us to do a quick estimate of the network size, similar to tag-and-release statistical sampling (see the sketch just after this list).
    • Random-routed requests for actual data: These would be random routed for some number of hops and then turn into normal requests. This would allow us to test fetching a given key from a random node, so we can get a better picture of performance across the network as opposed to having to bootstrap a new node (for a bad result) or use a well established developer node (for a good result). This might also be useful to e.g. freesite authors, because you could determine whether given content is still available.
    • Tracer requests: A special key type used only for testing, kept in a separate datastore. You could insert a test key, and then fetch it some time later or from a different node. Both on insert and request, the nodes' locations and UIDs will be returned.
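
    Here is a rough illustration of the tag-and-release idea behind the first item, using the Lincoln-Petersen estimator; the sample data is invented:

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        // Take two independent samples of node IDs, count the overlap, and apply the
        // Lincoln-Petersen estimator N ~= n1 * n2 / overlap. For example, 200 IDs per
        // sweep with 16 seen in both would suggest roughly 2500 nodes.
        public class NetworkSizeEstimate {

            public static double estimate(List<String> firstSample, List<String> secondSample) {
                Set<String> tagged = new HashSet<>(firstSample);
                long recaptured = secondSample.stream().filter(tagged::contains).count();
                if (recaptured == 0) return Double.POSITIVE_INFINITY; // no overlap: can't estimate yet
                return (double) firstSample.size() * secondSample.size() / recaptured;
            }

            public static void main(String[] args) {
                // toy example: 4 IDs per sweep, 2 seen in both sweeps => estimate of 8 nodes
                System.out.println(estimate(
                        List.of("a", "b", "c", "d"),
                        List.of("c", "d", "e", "f")));
            }
        }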

    Obviously these are temporary measures to help us to understand what is going on on the network and thus improve performance. One of the best ways to improve Freenet's security and survivability is to get a bigger network IMHO, and we will achieve this by improving performance. Probe requests suggest Freenet is around 2500 nodes at any given time, up slightly from a few months ago, while performance has improved.

    Bloom filter sharing is still planned for 0.8, but may be postponed; it is more important to deal with the other network issues first, as they may be easier to fix now. Most of the hostile environment / security stuff we have planned for 0.8 has already been implemented; there is more work to do on usability and integrated functionality, probably more work on persistent downloads (e.g. backing up the database, less disk access for downloads), fixing XMLSpider, fixing the uninstall survey, etc. Sashee's web-pushing branch will be merged when it is ready; this might be after 0.8.

    We have approximately 20 weeks left on current funding. This will be enough to get a reasonable 0.8 out, but there is much more we'd like to do. Hopefully performance and data persistence will be improved considerably in 0.8, and we may implement Bloom filter sharing, but there is more work to do beyond that; for example there is a second phase for Bloom filter sharing, and there are some plans for improving performance on slower links by reusing padding bytes for transferring data. Beyond that, the wish list includes:

    • A "pause mode", which has widespread support: nodes could be sent to sleep for a while during gaming but recover quickly once they are wanted again.
    • Fully distributed, web of trust based searching, which might eventually have a secure reinsert-on-demand mechanism.
    • Passive requests, which would greatly improve chat apps and similar things, speeding them up, making them scale better, and reducing their network impact.
    • Long-term requests, which would make low-uptime nodes and disconnected darknets much more useful.
    • Sneakernet, which is obviously essential for some hostile environments; the combination of sneakernet and long-term requests would give us something that could run anywhere you can swap USB sticks, or have phones do bulk transfer over wifi when you are close to your friends.
    • Randomising the keys for inserted data, which would help considerably with security for large inserts; there are issues with reinserts, so it might be a configurable option.
    • Encrypted tunnels, which would improve security at the cost of quite a bit of performance; combined with randomised keys we could just tunnel the top keys.
    • Integrating Freemail with Freetalk (Freemail might have to be rewritten).
    • Scaling work: Freetalk, WoT and all chat apps will have major scaling issues, but hopefully these can be overcome.
    • evanbd's Fritter microblogging app, which is a great idea, and Artefact2's blogging plugin, which might be included in 0.8.
    • Revocable SSKs (RSKs), an important feature for hostile environments which will allow us to have an official project freesite.
    • Filters for more content types, which are vital - particularly audio/video.
    • Limited Javascript support, and untrusted/semi-trusted plugins, possible given considerable work and various configurable security choices/tradeoffs; even a form of video streaming may be possible.
    • Faster and more secure swapping.

    So there is a huge amount of work to do for Freenet to become a really awesome tool for both filesharing and censorship avoidance, and I hope that we may obtain further funding so that we can achieve most of it. To donate to FPI, click here, or to help us with code, translations, documentation etc, please contact us!

    2009/10/16

    Build 1237 and related things

    1237 is now available, please upgrade. This fixes a bug in the client layer which was causing downloads not to complete (probably not the only one, sadly), fixes a minor exploit in the content filter, makes Library official and auto-loads it over Freenet, has more work on evanbd's hourly by-HTL stats logging, and some changes to the FCP feeds API.

    Library is the new search plugin built by infinity0 and mikeb. It is integrated into the fproxy user interface, supports phrase searching, boolean operators and basic page ranking, and is a big improvement. But the real story is that Library supports the new Interdex search file format, an on-Freenet b-tree, which should be much more scalable than the current index format. In the medium term it will hopefully also support distributed WoT-based searching, which could form the basis for a really interesting filesharing system. The reason it's called Library is that it is not just the user interface: XMLSpider will be adapted to talk to Library to do its index writing. Hopefully in the not too distant future the spider will use Library, and will write indexes much more quickly by bunching writes together in memory and then rewriting all the indexes linearly. We can do this with the old format, which would make it much easier to run a spider (currently it can take 4 days to write a large index if you don't have a pair of Cheetahs or an Intel SSD), but it is even more interesting with the new, scalable format.
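
    To illustrate why an on-Freenet b-tree scales better, here is a sketch of a term lookup that fetches only the nodes on one root-to-leaf path rather than the whole index. The node layout and fetchNode() are hypothetical stand-ins, not the actual Interdex format:

        import java.util.ArrayList;
        import java.util.List;

        // Sketch only: each index node is a separately fetchable block, so a lookup
        // costs one fetch per tree level instead of downloading the entire index.
        public class BtreeIndexLookupSketch {

            static class IndexNode {
                List<String> keys = new ArrayList<>();        // sorted terms separating the children
                List<String> childUris = new ArrayList<>();   // URIs of child nodes (empty at a leaf)
                List<String> leafEntries = new ArrayList<>(); // postings stored at a leaf
            }

            interface NodeFetcher {
                IndexNode fetchNode(String uri); // hypothetical: one network fetch per node
            }

            public static List<String> lookup(NodeFetcher fetcher, String rootUri, String term) {
                IndexNode node = fetcher.fetchNode(rootUri);
                while (!node.childUris.isEmpty()) {
                    // standard b-tree descent: childUris has one more entry than keys
                    int i = 0;
                    while (i < node.keys.size() && term.compareTo(node.keys.get(i)) > 0) i++;
                    node = fetcher.fetchNode(node.childUris.get(i)); // descend one level
                }
                return node.leafEntries; // in a real index, filtered to the entries for 'term'
            }
        }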

    The content filter exploit was simply a failure to encode type parameters properly. The worst it could do was allow a malicious freesite to inline lots of gigantic files as images/frames without querying the user about their size, but it nonetheless needed to be fixed. This was discovered during my work on merging the new CSS filter, which is more secure (because it parses CSS properly) and more functional (because it fully supports the spec, or will by the time it's merged). This will hopefully be merged within the next few days. Kurmi had done most of the work for his Summer of Code project, but there were various serious issues that needed sorting out. There is also an ATOM (but not RSS) filter to merge, a Thaw filter, and someone is working on an SVG filter (all of these should take much less time to review and merge). In the medium term, there is much more work to do: we need SVG in XHTML, XHTML in ATOM, possibly RSS, and hopefully some audio/video formats. Long term it'd be awesome if somebody could write a Javascript filter; it would have to replace all objects and functions that could be compromising with safe versions, which makes it a fairly big project, but it's quite possible for a good coder. None of the current devs know Javascript well afaik, but maybe I'm wrong about that. To make eval() work, the filter itself would probably have to be written in Javascript, or translated into it via GWT. Another interesting point: Flash nowadays is essentially Javascript with a different library and a binary format, so a Javascript filter would go a long way towards Flash support. Of course, on top of all this, you have all the worries about timing/datastore probing attacks etc, so scripting on Freenet is hard - but it's by no means impossible...
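
    For the curious, the safe pattern for the type-parameter problem looks roughly like this; the parameter name and URL shape are illustrative, not fproxy's exact output:

        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;

        // The exploit boiled down to putting an attacker-supplied MIME type into a
        // generated URL without encoding it. Encoding the value means '&' and '='
        // inside it can no longer smuggle extra parameters into the link.
        public class TypeParamEncoding {
            public static String linkWithType(String freenetUri, String mimeType) {
                String encoded = URLEncoder.encode(mimeType, StandardCharsets.UTF_8);
                return "/" + freenetUri + "?type=" + encoded;
            }

            public static void main(String[] args) {
                // an attacker-controlled "type" value is now just an opaque string
                System.out.println(linkWithType("CHK@example", "image/png&max-size=999999999"));
            }
        }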

    In the short term, evanbd's work on stats could turn out to be very important. Recent tests show that if you insert data on a well established node and fetch it immediately from another well established node, you can see transfer rates in the region of 40-80KB/sec, with one report of rather more. However, if you wait 3 days, there is a disturbingly high failure rate - although that seems to have improved a bit since the last bunch of routing fixes. evanbd is working consistently on stats and simulations, and this has been very helpful, with the recent routing fixes coming out of it. Soon I will implement "tracer requests": these will insert data using a special key type and record an identifier for each node it is inserted to, and when we fetch the data later we will likewise see where the request went. We will use a new node identifier that doesn't correlate with anything, and IMHO this is no more dangerous than probe requests (which we will be turning off), especially as it doesn't use "real" keys (we will create a very small datastore for such test keys). We think routing works pretty well; a popular theory for why data retention is so bad is that we don't handle opennet churn very well, but there are other possibilities, and hopefully tracer requests will help us to understand what is going on. Given that Freenet can be fast, with some fixes and probably Bloom filter sharing, hopefully we can have a substantial performance improvement.

    Other stuff since the last post, which you probably already know about: the routing changes I mentioned above - for a while now we have taken into account not only our peers' locations but also our peers' peers' locations, but we were not handling this properly. The seednodes file is now auto-updated, and the Add a Friend page now serves the latest Windows and Java (OS/X and Linux) installers. We now have a Dutch translation, updates to French, German, Italian and Chinese, and more of the web interface can be translated. 1234 included some important work on memory usage, and 1235 included some optimisations to the client layer (persistent downloads and uploads), trying to reduce database accesses and therefore disk i/o, cpu usage and memory. 1236 included some important work on ULPRs. Ultra-Lightweight Passive Requests are an optimisation introduced some time ago, designed to make polling apps such as chat clients faster and more efficient: for an hour, we remember which nodes have requested a specific key (or which nodes we have requested it from), and tell them if we find it. In 1236 we propagate ULPRs more effectively, more securely and more accurately, as well as fixing a bug that may have been preventing them from working since July. Other recent work includes plugins, the datastore, the client layer, bandwidth limiting, node shutdown, automated tests and simulations, and more.
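
    A rough sketch of the ULPR bookkeeping, for the curious; the peer identifiers and the offer step are illustrative, not the real classes:

        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        // For each key, remember for an hour which peers asked us for it (or which
        // we asked), and offer them the data if it later turns up.
        public class UlprTableSketch {
            private static final long EXPIRY_MS = 60 * 60 * 1000L; // one hour

            private static class Subscription {
                final Set<String> interestedPeers = new HashSet<>();
                long lastUpdated;
            }

            private final Map<String, Subscription> subscriptionsByKey = new HashMap<>();

            public synchronized void rememberInterest(String key, String peerId) {
                Subscription sub = subscriptionsByKey.computeIfAbsent(key, k -> new Subscription());
                sub.interestedPeers.add(peerId);
                sub.lastUpdated = System.currentTimeMillis();
            }

            // Called when the key is finally found (locally or via a later request).
            public synchronized Set<String> peersToOffer(String key) {
                Subscription sub = subscriptionsByKey.remove(key);
                if (sub == null || System.currentTimeMillis() - sub.lastUpdated > EXPIRY_MS) {
                    return Set.of(); // nobody waiting, or interest has expired
                }
                return sub.interestedPeers; // caller would send each of these an offer
            }
        }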

    However, I have taken some time off work lately - partly because of minor ailments, but mostly because of getting involved in climate change events (all over until December for me now). Yesterday was Blog Action Day, an attempt to have a co-ordinated campaign of blog entries about climate change; obviously I missed it! Unless strong action is taken to prevent severe climate change (defined as more than 2 degrees warming), we may find ourselves without "a planet similar to that on which civilisation developed", in James Hansen's words. Of course, something will be done - the question is whether enough will be done (Hansen and the small island states are arguing for a much tougher position than developed and rapidly developing nations are willing to commit to right now). The Met Office, for example, is saying that 4 degrees by 2060 is possible. Have a look at the New Scientist coverage if four degrees doesn't scare you. It won't be evenly distributed: some areas would have no warming, some would have 10 degrees or more. And as always, the poor are hit hardest, because they live in places which are already seeing severe changes in the weather, because they live off the land, and because they often have no resources to adapt with. Christian Aid estimates there may be a billion climate refugees by 2050 (others estimate 150-250 million), although some argue that those who are worst affected are probably also least able to migrate... The global poor are paying for our pollution, and the situation is going to get a lot worse even if we do take strong action. If we don't, vast areas of the world will likely become unfarmable due to drought, floods (often directly following drought), desertification, more natural disasters and so on. All of these are happening now, and have gotten significantly worse over the last 50 years, with only a 0.8 degree rise in temperatures since the industrial revolution. Maybe we'll be able to geo-engineer our way out of the most severe consequences, but it will be much cheaper and less painful to deal with the problem now.

    What must happen is drastic cuts in emissions - 40% from 1990 levels by 2020 in the rich countries, and some meaningful progress from the fast developing nations - and large amounts of cash to help the developing countries adapt and to grow in a low-carbon way (at least $150B). Neither of these demands is likely to be met fully in Copenhagen, with the US position constrained by domestic politics, India unwilling to take on any binding targets, and the Chinese so far offering only vague mumblings about intensity targets while demanding the right to keep on increasing emissions until 2050, by which time they would occupy the entire carbon budget for the world and then some (recent research shows they could peak by 2030 without any serious economic/poverty impact if they wanted to).

    Sorting out this mess will be painful: energy prices will have to rise considerably, and flying will probably in the long term become the preserve of the rich once again (but we'll have better rail). In the near future, though, a great many jobs could be created. Unfortunately many countries - particularly Britain - have squandered obvious opportunities for a green recovery, preferring to spend vast sums on propping up retailers and car companies. Nonetheless there is a short-term economic case for action. The long-term case is rather more stark: the Stern review, years ago, said the economic cost of taking action would be around 1% of global GDP, and the cost of not taking action around 5%, but it also gave a worst-case estimate of 2% to take action and 20% to not take action. Stern has recently said that in the light of recent science the latter is probably more accurate. 20% of GDP is equivalent to two world wars and the Great Depression, and we need to start taking it that seriously!

    So if you have not already contacted your representatives (national MPs and if appropriate MEPs), and told them that you care about the future of the planet, that Copenhagen must deliver, and that your nation must take drastic action both to lead in negotiations (40% by 2020, as Norway recently pledged, and their share of $150B+ of adaptation funding) and to build green infrastructure instead of brown infrastructure (Kingsnorth and Heathrow 3rd runway cancelled in the last month, yay!), please do so. There are many resources on the web and involved organisations e.g. 350.org, Stop Climate Chaos, Campaign against Climate Change, Operation Noah, Christian Aid. Read their resources and then email your representatives (UK) in your own words, because this is far more effective than clicking a standard form letter. After that, think about attending a relevant national demonstration - in the UK, The Wave is planned in London on the 5th of December. The talks themselves are on the 7th to the 18th of December in Copenhagen. The EU negotiating position will be finalised on 29-30th of October, so if you are in Europe you should write to your MEPs before then and maybe consider demonstrating during the session (although I'm not sure anything's organised yet; Climat et Justice Sociale Belgium may have something). The 24th of October is a global day of action for 350.org; there is probably an event near you, unless you are in the UK. In the UK, a colossal scheduling cock-up means there is a major anti-war protest (Iraq killed around a million people, climate change already kills that many every 3-4 years), the Anarchist Bookfair, and other local events going on... good luck, I'll probably attend the York Wave rather than try to organise anything in Scarborough.

    Recent events I've been involved in include Operation Noah's Climate Change Day of Prayer, for which I printed out far too many pages of resources (I suspect I'll be able to unload some of them though), and Tearfund's climate justice evenings (an equipping meeting that was interesting and encouraging), for which I had to sleep over in Leeds (one small disadvantage of living in Scarborough is that transport is mediocre). I'll be in London for The Wave, and in Copenhagen for some of the period of the negotiations (if you are coming, talk to New Life about accommodation; you probably won't find a hotel or a hostel). How much difference demonstrations outside international negotiating events make, when nobody can tell who their constituents are, is an open question, but it's good to have a presence, and the alternative summit should be fun. You can get an idea what is going on from the Adopt a Negotiator blog (on Freenet via Freerunner, RSS). Oh, and tomorrow Climate Camp and others will be shutting down Ratcliffe-on-Soar, a major UK coal power station which produces more CO2 than a large group of least developed countries put together...

    2009/09/04

    Build 1233 and related things

    Much work since last time! The XML bug, scaling max peers with bandwidth, plugin loading/update over Freenet, the minimalist theme and the new status bar, some minor filter changes with more to come, some important datastore fixes, more internal documentation (javadocs) and lots of bug fixes!

    First off, the XML bug. This has been extensively covered elsewhere so I'll give you the basics and the current status: a remote code execution bug in a widely used XML parsing library is present in Sun Java up to 1.5.0_20 and 1.6.0_15 (amongst other things). Freenet will therefore refuse to load official plugins which parse XML on older versions of Java. 32-bit Windows users should be largely unaffected, as the auto-updater will pick it up. 64-bit users may have a problem, as the auto-update only works for the 64-bit JVM and we need a 32-bit one (we are considering solutions given the increasing prevalence of Vista/64); download the latest 32-bit JVM and you'll be fine. Linux users may have to obtain the packages directly from repositories (e.g. the pool directory for Debian) if your distro hasn't pushed the updated versions yet. OS X 10.6 has the new JVMs, and 10.5 Update 5 will have them when it is out; currently Mac users have a big problem, but this will go away soon, so it is unclear whether it is worth importing a separate library just for Mac users for a short period...
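
    The check involved is roughly this kind of thing: parse the java.version string and refuse to load XML-parsing plugins below the fixed update level. The thresholds here assume 1.5.0_20 and 1.6.0_15 are the releases carrying the fix, and the real check is more careful:

        // Sketch only: the exact thresholds and method names are illustrative.
        public class JvmVersionCheck {

            // Returns true if this looks like a 1.5 JVM older than update 20,
            // or a 1.6 JVM older than update 15.
            public static boolean isVulnerable(String javaVersion) {
                String[] parts = javaVersion.split("_");
                int update = parts.length > 1 ? Integer.parseInt(parts[1].replaceAll("\\D.*", "")) : 0;
                if (javaVersion.startsWith("1.5.")) return update < 20;
                if (javaVersion.startsWith("1.6.")) return update < 15;
                return false; // unknown JVMs: assume patched (the real check is more careful)
            }

            public static void main(String[] args) {
                System.out.println(isVulnerable("1.6.0_14")); // true  -> don't load XML plugins
                System.out.println(isVulnerable("1.6.0_15")); // false -> safe to load
                System.out.println(isVulnerable(System.getProperty("java.version")));
            }
        }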

    Scaling max peers with bandwidth: Freenet will now use between 10 and 35 peers depending on your bandwidth usage. This should improve speed on both slow and fast nodes (lower overheads for slow nodes, more routes for fast nodes), and it was the number one uservoice request for quite some time. Please let us know whether Freenet has improved performance-wise since 1231! Again this has been covered elsewhere. The number one item on uservoice is now "one GUI for all", which I interpret as a plea for more functionality integrated into the web interface...
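
    The exact curve isn't spelled out here, but the idea is roughly this, assuming a simple linear ramp between the two limits; the breakpoints are invented for illustration:

        // Sketch only: ramps from 10 peers at ~10KB/sec output to 35 peers at
        // ~160KB/sec and above. Not the real constants.
        public class PeerScalingSketch {
            private static final int MIN_PEERS = 10;
            private static final int MAX_PEERS = 35;
            private static final int LOW_BANDWIDTH = 10 * 1024;   // bytes/sec
            private static final int HIGH_BANDWIDTH = 160 * 1024; // bytes/sec

            public static int maxPeersFor(int outputBytesPerSecond) {
                if (outputBytesPerSecond <= LOW_BANDWIDTH) return MIN_PEERS;
                if (outputBytesPerSecond >= HIGH_BANDWIDTH) return MAX_PEERS;
                double fraction = (outputBytesPerSecond - LOW_BANDWIDTH)
                        / (double) (HIGH_BANDWIDTH - LOW_BANDWIDTH);
                return MIN_PEERS + (int) Math.round(fraction * (MAX_PEERS - MIN_PEERS));
            }

            public static void main(String[] args) {
                System.out.println(maxPeersFor(16 * 1024));  // slow node: close to 10 peers
                System.out.println(maxPeersFor(200 * 1024)); // fast node: 35 peers
            }
        }

    Lower overheads on slow links, more routes on fast ones - which is exactly the tradeoff described above.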

    Plugin updating over Freenet: We now load plugins over Freenet by default! Previously we were loading official plugins from emu directly, which is very hazardous and may not be possible at all anywhere vaguely hostile, not to mention giving away your IP if you are on darknet. We didn't do this automatically - if a plugin was out of date, we'd ask the user whether to fetch it from emu - but it meant anyone unable to access checksums.freenetproject.org couldn't keep their plugins up to date, and the old versions would frequently stop working because of changes in the core. In 1233, for example, there are core changes affecting 6 plugins, all of which will now be automatically pulled over Freenet on starting up; these fetches should work, but if they don't, Freenet will allow you to either give up, let the node continue to try to fetch the plugins, or fetch them from emu. This is important now, especially for hostile regimes, but it is essential with Freetalk and Artefact2's upcoming blogging plugin!

    Recently I've been doing a good deal of code reviewing and other miscellaneous stuff. This is great, because it means we have more volunteer work going on! Artefact2 has recently started contributing to the web interface (new theme, status bar), statistics (persistent uptime and bytes transferred stats), and plugin infrastructure (localisation and data storage), and is working on a blogging plugin, which is something we should have had years ago. evanbd suggested we implement a breakdown of request success rates by HTL, which has produced some remarkable results (have a look!), and promises to reduce the amount of alchemy that has crept in - largely because of the considerable difficulty of any objective measurements - since Ian declared the end of the age of alchemy all those years ago. He has analysed the data and helped with theory, and is working on a nano-blogging plugin and some changes to splitfiles. kurmi's XHTML and BMP filter changes have been merged, and his CSS, Thaw and ATOM filters will be merged soon.

    infinity0, Artefact2 and others have been working on a new freenet-ext.jar with non-segfaulting 64-bit FEC support, FreeBSD support, etc - please test it and report back! Hopefully infinity0 and mikeb will be able to fix any remaining issues with Library so it can be the default new awesome search plugin (even if it may not have full distributed search yet), and more importantly, update XMLSpider to produce indexes (in the new format, supported by Library alongside the old one) much more quickly. Zero3's (and Juiceman's) new Windows installer (please test and report back; no update.cmd at present, and not fully compatible with the old one) is nearly ready, but it has been nearly ready for some time, so we'll see; it features a system tray icon, and doesn't create a custom user, so should have less of a problem with antivirus, system password policies etc. mrsteveman1 has had some trouble getting a systray icon to work on OS/X, but hopefully there will be one at least for some versions. On Linux, we will have to use a Java version, but I have been provided with some example code. saces has been working on various internal things and plugins; hopefully his multi-container freesite insert code will be made default before 0.8, as it would be a huge improvement for big freesites. ljb's persistent node-to-node file transfers haven't hit fred-staging yet, but we passed him (and the others) so I am confident they will.

    Our translators continue to be helpful, but as always we need more languages. There have been hints about Russian and Farsi but so far not much has actually happened. A possible Farsi translator said something about RTL support; we need to look into this, but it might be easier with some help from the would-be translator - if you want to translate Freenet, please let us know. sashee's web-pushing branch is also approaching being merge-ready, and hopefully will be merged within the next month; loading the Activelink Index with pushing is quite impressive. Work on Freetalk (mostly by p0s aka xor) is slowly progressing; it should work in the current build, but expect incompatible updates, bugs and so on. However, I do read the Freetalk boards, so it may be the easiest way to contact me anonymously (Freemail is very temperamental for me). I've probably missed out a bunch of people and I'm sorry - I'm just trying to give a flavour of the current situation: there is a lot going on, most of it by new or re-engaged volunteers!

    In the reasonably near future, changes to splitfiles (reinsert the top block when it takes more than a little while to fetch, split data evenly among the segments of a splitfile, make the smallest segments a bit bigger) should improve data persistence somewhat, and if necessary MHKs will improve it further, without involving major changes. There are a few security and hostile environment matters that need to be dealt with, or that we would very much like to deal with: use more rounds in AES, encrypt plugins' on-disk data (and completed download list files), keep a copy of the latest installer on disk or at least link to it on the Add a Friend page, possibly RSKs (revocable SSKs, so if the private key is leaked you can still tell people this), some means to prioritise a modem-based darknet peer so it has some chance of its requests being answered, and possibly a friend-to-friend web proxy system to get around national firewalls with no frills attached (this is actually a lot easier than it sounds!). Freetalk is essential for 0.8, and hopefully p0s can get there with minimal help; an RSS filter would be nice to go with the ATOM filter; there are bugs in USKs but also the potential for major improvements; db4o auto-backups are essential, as is better usage of the datastore; and we also expect a few more usability/UI improvements (and a few things mentioned already). Most of this is relatively easy. However, I still plan to implement Bloom filter sharing. All the indications are that this would be a significant performance and data retrievability gain, and the initial implementation shouldn't take more than a couple of weeks, now that we have the prerequisites (caching changes) sorted.

    Many other features have been postponed until 0.9, and many of the ones mentioned here may be too. Fortunately we have a reasonable amount of breathing space, as we have spent only around a third of Google's $18K so far. So 0.8 should be really great! There have been a number of positive comments lately from users trying Freenet again after some years, and reviewing Content of Evil was quite educational, showing how far we have come. While we will run out of funding sometime early next year, 0.8 could be a breakthrough in many ways, and I believe we will find additional funding - whether it be through paypal micro-payments from many concerned individuals (I have been paid entirely out of these at various points in the past, although I had lower costs then), from groups interested in hostile environment deployments (in some cases Freenet may be the best solution), or from some other source.

    2009/07/30

    another amphibian

    Build 1226

    Build 1226 is now out. A lot has happened in this build. Firstly, we have finally fixed The Register's attack. Basically the problem is that freesites you browse, files you download etc were cached in your node's datastore, speeding up future accesses but making it possible either to look at your store and find out what you've been browsing, or to remotely probe your node's datastore (assuming the attacker is connected), timing how long it takes to fetch blocks to try to work out whether you have fetched particular files. What we do in 1226 is only cache in the datastore once the HTL has dropped below a threshold, so your request will not be cached unless it is at least 2 hops away from the originator (often more).
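
    In code, the rule is roughly this; the maximum HTL of 18 and the two-hop margin are assumptions for illustration, not the exact constants:

        // Freenet requests start at a maximum HTL and count down per hop; the fix is
        // to cache a returned block only once the request is a couple of hops from
        // whoever started it.
        public class CacheDecisionSketch {
            private static final int MAX_HTL = 18;

            // htl is the hops-to-live the request arrived with at this node.
            public static boolean cacheInDatastore(int htl) {
                // At MAX_HTL or MAX_HTL-1 we could be the originator's direct peer
                // (or the originator itself), so don't leave evidence in the store.
                return htl <= MAX_HTL - 2;
            }

            public static void main(String[] args) {
                System.out.println(cacheInDatastore(18)); // false: too close to the requester
                System.out.println(cacheInDatastore(15)); // true: at least two hops away
            }
        }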

    Why has it taken so long? Well, we had assumed it would be fixed when we implemented premix routing or encrypted tunnels. These would also make a lot of other attacks a lot harder. Recently it has become clear that these will likely be a lot of work and may have a significant performance cost. One reason for these changes (which may reduce performance slightly) is that they are essential prerequisites for Bloom filter sharing, which should greatly improve performance, and will be implemented in the coming months (see more below). But another reason is that in many hostile environments, seizing a node and checking what people have been downloading is a very real threat, and up to 1226 Freenet left far too much evidence...

    1226 allows you to configure the tradeoff between convenience, performance and security. Physical security levels now range from LOW to MAXIMUM. At LOW, nothing is encrypted by Freenet (but you might have whole drive encryption). At NORMAL, your node.db4o (downloads/uploads database) and your client cache (a new kind of datastore used to avoid fetching the same freesites repeatedly) are encrypted, with the keys stored in a file called master.keys. Securely delete that file and both are useless, and so are all the other temporary files as their keys are kept in node.db4o. The panic button has been fixed and reinstated, and will do this for you (but watch out for data-journaling filesystems and flash devices). On HIGH, you can set a password for master.keys, and you cannot access the download/upload queue until you enter this. On MAXIMUM, the keys are random on every startup, so there are no persistent requests, but you can still fetch big files as long as they complete before restarting.
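
    Summarised as an enum, just to restate the tradeoff compactly; the field names are illustrative, not Freenet's actual configuration classes:

        public enum PhysicalSecurityLevel {
            LOW(false, false, false),      // nothing encrypted by Freenet itself
            NORMAL(true, false, false),    // node.db4o + client cache encrypted, keys in master.keys
            HIGH(true, true, false),       // as NORMAL, plus master.keys protected by a password
            MAXIMUM(true, true, true);     // keys regenerated on every startup: no persistent requests

            public final boolean encryptsDatabaseAndClientCache;
            public final boolean requiresPassword;
            public final boolean forgetsKeysOnRestart;

            PhysicalSecurityLevel(boolean encrypts, boolean password, boolean forgets) {
                this.encryptsDatabaseAndClientCache = encrypts;
                this.requiresPassword = password;
                this.forgetsKeysOnRestart = forgets;
            }
        }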

    Freenet migrates your node.db4o automatically on startup if it needs to be encrypted, and it supports defragmenting node.db4o on restart (mine shrank by a factor of 10), which it will do when you upgrade. There are also security and performance improvements to ULPRs (ultra-lightweight persistent requests, used to speed up polling e.g. Frost, FMS), ljb's mostly internal work on user-events (did you know your node has an RSS feed on /feed/ ?), and bugfixes and optimisations. So that's 1226, please go get it! There may be a small performance hit in the short term...

    The big performance feature for 0.8 is Bloom filter sharing. I have had a lot of help with working this out from evanbd, but originally it was Oskar's idea. Basically, we tell our peers what keys we have in our datastore (using a highly compressed data structure called a Bloom filter). This should cost around 300 bytes/second on a node with 30KB/sec upstream, but it enables us to short-cut and route directly to the node with the data when we happen to be nearby. For really popular data this means we can probably fetch it from one of our direct peers, and bypass load management. For less popular data it means we check 20 times more nodes' datastores, so once we are in the rough area where the data should be, we should find it efficiently. Hence it should improve performance, in terms of transfer rates, latency and how much data can actually be found, by a considerable amount, but we don't know by how much yet. Unfortunately we can't just deploy this for security reasons - hence the work above to ensure that the only stuff in our datastore is data which other peers at least 2 hops away have requested.
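
    For those who haven't met Bloom filters, here is a minimal sketch of how one works and why sharing one is cheap: membership is just a handful of bit positions per key, and it can only produce false positives, never false negatives. The sizes and hashing are illustrative, not Freenet's actual filter parameters or key format.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.BitSet;

        public class BloomFilterSketch {
            private final BitSet bits;
            private final int sizeInBits;
            private final int hashCount;

            public BloomFilterSketch(int sizeInBits, int hashCount) {
                this.bits = new BitSet(sizeInBits);
                this.sizeInBits = sizeInBits;
                this.hashCount = hashCount;
            }

            public void add(String key) {
                for (int i = 0; i < hashCount; i++) bits.set(indexFor(key, i));
            }

            // A "maybe" just means it is worth routing the request to that peer first.
            public boolean mightContain(String key) {
                for (int i = 0; i < hashCount; i++) {
                    if (!bits.get(indexFor(key, i))) return false;
                }
                return true;
            }

            private int indexFor(String key, int round) {
                try {
                    MessageDigest md = MessageDigest.getInstance("SHA-256");
                    byte[] digest = md.digest((round + ":" + key).getBytes(StandardCharsets.UTF_8));
                    int value = ((digest[0] & 0xff) << 24) | ((digest[1] & 0xff) << 16)
                            | ((digest[2] & 0xff) << 8) | (digest[3] & 0xff);
                    return Math.floorMod(value, sizeInBits);
                } catch (java.security.NoSuchAlgorithmException e) {
                    throw new AssertionError(e);
                }
            }

            public static void main(String[] args) {
                BloomFilterSketch filter = new BloomFilterSketch(1 << 20, 5);
                filter.add("CHK@someblock");
                System.out.println(filter.mightContain("CHK@someblock"));  // true
                System.out.println(filter.mightContain("CHK@otherblock")); // almost certainly false
            }
        }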

    A big problem with Freenet is that it is relatively difficult to communicate with other users. This is partly because of the lack of a good integrated chat system, which p0s is working on with Freetalk. But it is also a fair amount of work, especially for less technical users, to set up jSite, build a site in HTML and upload it. It appears a basic blog engine can be written in 800 lines of php including templates ... so it might make sense to spend a few days on this as a relatively high priority, to make it easier for new users to contribute content. Freetalk would handle both comments and announcing a site. Granted, Thingamablog does something similar, but I don't think this has been maintained lately, and it is a separate application which you have to download; it should be a web interface, just like Blogger. This is not just a question of what will get us the most users (and hence funding) in the short term: If Freenet were to be used in a hostile environment (e.g. China), people using it would probably not be very technically literate, would be used to using web services to create content, and so on. Security matters, but usability matters too.

    In other news, our Summer of Code students are doing really well this year. My student sashee has built a javascript push framework for Freenet, which in the web-pushing branch is used to update fproxy's site loading progress bars more quickly, smoothly and cross-platform (it works on WebKit, unlike the current code), to update the downloads page on the fly, and to show progress for inline images while not blocking lots of browser connections (most browsers are limited to 8 or fewer connections to a single host). My other student, infinity0, is working on a distributed searching system called Interdex, which hopefully will transform both searching for files and searching for freesites. mikeb has done some great work on the existing search system (both spider and search plugin), ljb is working on friend to friend functionality (exchange of files, bookmarks etc between darknet peers), and kurmi has already written filters for BMPs, ATOM feeds and a new CSS filter, which will be integrated shortly. We will run out of funding in approximately January, but it should be a very interesting year!

    2008/02/29

    amphibian

    Chat and other matters

    Freenet 0.7 is coming along nicely, although the tentative feature freeze is by no means absolute. Ian wants to release in March; a release candidate towards the end of March seems feasible at this point. ULPRs and related code (the cooldown queue) are pretty much finished, and there is a feature freeze, although some HTL changes have been necessary and one more may be needed in the next few days. Bugfixing is in full swing; the spider and the transport layer have been recent targets. We still have way too many timeouts, but many-nodes-in-one-VM simulations (freenet.node.simulator) have helped to find the cause of some...

    However, one big piece isn't ready yet: a working, user friendly, hard to spam, bundleable chat client. The spammer seems to have taken a day off today, but until today he had relentlessly spammed all the default boards and many more popular boards, making them unusable. IMHO Freenet is a community, and for any community to function there must be a usable means of chat. Right now FMS is apparently working and under heavy development, but it is in C and uses an NNTP interface; seagull is apparently working on porting it to Java, but we haven't seen his work yet. Ian is of the view that Frost and FMS are separate projects to the node and we shouldn't wait for them. What do you think? Answers to an appropriate forum; should we:

    1. Unbundle Frost. We could just not ship Frost. This has the advantage that we could meet our deadline for shipping 0.7.0 more easily. And we wouldn't be shipping any known-vulnerable or known-broken software. But IMHO it would reduce user retention dramatically, at least by 50%.
    2. Ship Frost as-is. If Frost isn't being actively DoS'ed, we could just hope that the spammer is a friend (e.g. an FMS advocate), and ship Frost anyway. If he isn't, he will DoS Frost as soon as we ship 0.7.0, even if he takes a break in the meantime. IMHO it would probably be a good idea to mention the fact that Frost has been actively DoS'ed and remains vulnerable in the 0.7.0 announcement: this will put off users, but if they have to discover it themselves, that will put off even more users. Our problem is not getting people to install Freenet, it's user retention: we have huge "join-leave churn".
    3. Make it a release blocker and act accordingly. Review third party code, help with porting FMS to java if necessary, ship 0.7.0 with a bundled, working java port of FMS, probably with a web interface based on Worst. Have an official plugin (with anonymous contributors such as seagull, but also with non-anonymous contributors such as me) implementing the web of trust and exposing it via FCP, and another plugin (or an option in same one) implementing a web interface. You can get version 0.3 of the Worst source code here (if you have a more recent version then please post it to fms or to the fms board).

    Note that because FMS is in C and therefore not bundleable, I don't currently run FMS. I may run the java port of FMS when it becomes available - hurry up seagull! I've seen test posts, so it can't be too far away...

    In other news, we should seriously consider whether to take part in Google Summer of Code this year. I was approached on Wednesday by an enthusiastic and apparently competent would-be SoC student, who now has an SVN account and is working on a bug in the transport layer. Last year SoC was a lot of effort and a partial success - a lot of the code was of poor quality, but some of the devs have stayed around or reappeared. If we do do SoC this year we will probably take fewer students, and we'll certainly want to be more careful in selecting the student (and not the proposal; proposals can be renegotiated).

    2007/10/26

    amphibian

    Radical update

    A standard criticism of the Christian gospel is that the various resurrection accounts (bibles on freenet here or on the web here) are inconsistent. And at first glance they disagree on many important matters. I just found this gem (from the web), which makes a persuasive case that they can be pieced together into a coherent whole. Why does this matter? Well, if Jesus did rise from the dead, he probably is the Son of God. If he didn't, then Christianity (with all of its widely acknowledged moral contribution to the world) is nonsense. You can of course claim that the whole thing was made up, so it doesn't matter whether they are consistent; but they were clearly written by different authors, and reasonably early, although probably not direct eyewitness accounts; and Matthew's gospel was written to Jews, so claiming that the guards' tale was widely known among the Jews would probably have been a bad idea if it wasn't true. Have a look!

    I can't write a flog entry without a Freenet-related update. And I should probably explain what's going on locally too. Firstly, emu (the freenet project web/mail/etc server) has been down for a few hours; it will be back up soon. It is being moved from London to Manchester to cut costs for bytemark, and we're getting a free memory upgrade in the process. Secondly, we (family) are moving to Scarborough (wikipedia) on Thursday and may not be around for a while after (hopefully we will have a phone line when we arrive, so we should have dial-up, but there will obviously be lots of stuff going on e.g. unpacking). Reasons for the move include health (Bristol's air is really unhealthy), closeness to relatives, and a much bigger house. Working from home has certain advantages!

    Now for developments in Freenet itself: Build 1069 features a major security fix for our connection setup code: it was possible for an attacker to do a man-in-the-middle attack using weak Diffie-Hellman keys. Tor had a similar vulnerability in 2005, and Freenet 0.5 still has this problem (we are not going to fix it; we will apply a patch if you send one in, but Freenet 0.5 is unmaintained and unsupported). 1069 is mandatory on Tuesday, so upgrade! Hopefully the auto-updater will upgrade your node automatically, or is even now awaiting your clicking the "Update Now" button.
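
    The standard defence against this class of attack is to reject degenerate Diffie-Hellman public values; a sketch of the general check, not the exact 1069 patch:

        import java.math.BigInteger;

        // Reject any received Diffie-Hellman public value that is degenerate (0, 1,
        // or p-1), since those let an attacker force a predictable shared secret.
        public class DhPublicValueCheck {

            public static boolean isAcceptable(BigInteger peerPublicValue, BigInteger primeModulus) {
                BigInteger two = BigInteger.valueOf(2);
                BigInteger pMinusTwo = primeModulus.subtract(two);
                // require 2 <= y <= p-2, ruling out 0, 1 and p-1
                return peerPublicValue.compareTo(two) >= 0
                        && peerPublicValue.compareTo(pMinusTwo) <= 0;
            }

            public static void main(String[] args) {
                BigInteger p = BigInteger.probablePrime(256, new java.util.Random(42));
                System.out.println(isAcceptable(BigInteger.ONE, p));             // false: weak
                System.out.println(isAcceptable(p.subtract(BigInteger.ONE), p)); // false: weak
                System.out.println(isAcceptable(BigInteger.valueOf(12345), p));  // true
            }
        }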

    In other news, slow progress continues to be made towards full opennet. Path folding is working, UPnP (automatic port forwarding and IP detection if your router supports it) is working, and reconnecting after reasonable downtime is working (I dropped my darknet peers, shut down my node for 12 hours, and it got back on its feet with opennet soon after starting back up; admittedly I'm not NATed...). 1070 will have a fairly major security fix relating to path folding (traffic analysis related to packet size ... sorry), and the major items remaining before we can implement automatic bootstrapping are automatic detection of port forwards (to get rid of the annoying connectivity messages and automatically detect whether you are eligible to be a seednode), and anonymous connect crypto (the current link setup assumes both sides already know each other, which is great for darknet, and even for opennet, but not for bootstrapping opennet). So hopefully well before Christmas we will have full opennet support - full meaning you don't need to exchange noderefs with strangers, ever, even when you first connect to Freenet.

    That's no excuse for running Freenet in its most insecure mode though! If you know somebody who has a Freenet node, it is a really good idea to connect to them rather than to a total stranger chosen by the opennet logic, for several reasons. Firstly, your friend is likely to be more trustworthy than a random stranger, and Freenet is not perfect: those you are connected to can try to analyse your requests (see also the wiki). Secondly, it's not a random stranger: an attacker can choose to be connected to you relatively easily. Thirdly, your node is not invisible: an opennet node can be detected relatively easily, and then blocked by e.g. the Golden Shield (many western nations have some form of ISP-level blocking even now). And it is likely that one day Freenet will be illegal, even where you live, whether by IPRED2 (pending, EU), the DADVSI (passed, France), or some other legislation targeting copyright infringers, terrorists and paedophiles (but most likely the first). Oh, and you Americans, you're not safe either: they've already tried for mandatory trusted computing once. Opennet is a transitional phase, a means to an end: your node will automatically reduce the number of opennet peers (Strangers) it uses as you gain new darknet peers (Friends) (thanks to somebody on Frost for the idea). In the long run there will be no opennet. But in the short run it should allow much faster growth and a much easier install for most people trying Freenet.

    Apart from that, there has been some other progress: JFK has been merged (new connection crypto setup, strongly resistant to denial of service attacks), there have been many bugfixes including some fairly juicy ones (continually polling USKs, node stopping sending requests due to stuck inserts, etc), better IP detection (we run the plugins in parallel as they can be slow), translation updates, major bugfixes and improvements to jSite (it is now able to insert large freesites without silently dropping files after the 1000th), XMLLibrarian, the freesite search engine, is working and has a default index (type XMLLibrarian# or XMLLibrarian* into the plugins box to load it via HTTP; the latter will reload it on every startup), you can now tell a darknet peer to drop you when you remove it, datastore performance fixes, faster HTML filtering, cruft removal, and much more, all since the last update.

    But on the whole, progress is relatively slow. Why? Partly because life keeps getting in the way for me (don't worry about your donations, I charge FPI by the hour!), and largely because we have relatively few volunteer devs at the moment, although we just gained an anonymous freemail dev from Frost (Freemail is now a plugin, and appears to work at least sometimes: install it by loading Freemail* from the plugins page [ will reload from HTTP on every startup ], and send me a freemail!). Freenet is really a much bigger undertaking than the current team can handle in a reasonable time. If you want to help build a critical piece of a free online future, please contact us on IRC, the devl lists, or Frost. If you're interested in becoming our second full-time paid dev for Freenet, that might just be possible (no promises, sorry...); I advise you to start coding and contact us once you've established a reputation for being a capable coder (after say a month); that's what I did.

    And the Google Summer of Code 2007 is well and truly over; I've sent a final round-up to the development list. We learned a lot from this year, and that was cemented and added to at the conference Google hosted for us (which I originally went to because I didn't want nextgens to be the sole voice representing us! It was great though, much better than last year). Many thanks to Google for SoC; even though many of our students were disappointing, we got some useful code out of it, and at least two of the devs will hopefully stay on. Next year, if we are accepted, we will do things a little differently!

    Expect the next update when you see it!

    2007/09/06

    an amphibian

    Something happening?

    Google Summer of Code is finally over. Yay! Several of our students are likely to stay on as devs, and some useful code has come of it, although most of the projects were not fully achieved. Rough summary:

    • Swati Goyal worked as my student on the XMLSpider and XMLLibrarian plugins. While there remains a lot of work to be done for these to be really good, both now work reasonably well. You can spider the freeweb with the spider; it will produce an index in myindex7/ every 10 minutes; copy that to a separate directory and insert it as a freesite. Then feed the URL you get into XMLLibrarian, and you can keyword-search the freesites it found. For obvious reasons the Freenet Project cannot maintain an official index; hopefully several anonymous persons will maintain indexes. You can group multiple indexes into a folder. Eventually we will have embedded search forms in freesites.
    • Mladen Kolar worked as my student to develop a comprehensive yet simple to use FCP library in C++ using Boost - a library to make it easy for C++ developers to write Freenet clients. This seems to work, but it needs documentation and better example code; hopefully this will be dealt with soon. It may also eventually be wrapped in other languages.
    • Vilhelm Verendel has been working on simulations over the summer. He's one of Oskar's students, and has provided valuable input on various aspects of routing and swapping. Most recently, after a paper confirmed what we had suspected for a while, vive's work enabled us to tweak the swapping algorithm to prevent the network degenerating into a state where the entire network is clustered around a small number of locations, either due to churn or malicious attack (some believe we have recently been attacked on this level...).
    • Frederic Rechtenstein has built Echo, a blogging wizard/plugin for Freenet. I understand that this is very close to being usable, but it has some packaging problems. The objective is to make it as easy to create a blog on Freenet, in a completely decentralised way, under your control, as through any of the major commercial blog hosts.
    • Alberto Bacchelli has been working on unit tests for many of the support classes that Freenet depends on. In the process he has found a number of bugs.
    • Srivatsan Ravi has been building new link layer encryption code, which will be resistant to denial of service attacks. This will be merged when it is ready.

    Apologies for the limited progress with Freenet recently: I never seem to finish anything, life has been rather mad lately, it'll be sorted out soon (after we move)... Current priority items:

    • Opennet - Universal Plug and Play support for the opennet port, allow reconnection for a short period after downtime, announcements - we need to finish opennet ASAP, this will hopefully reduce the number of people who leave because they have to perpetually chase noderefs, although it's by no means a long term solution: you will all need true darknet peers eventually, because they will come and harvest the opennet and render you unto the people with the unusual definitions of torture (he resigned, yes!).
    • Ultra-Lightweight Passive Requests - necessary for outbox polling (spam resistant) Frost, useful in general... started but not finished, current priority depends on whether progress is made on Frost...
    • LRU not preserved on datastore recovery - we need to fix this, it causes loss of content (SSKs are lost too :( ).
    • Probe traces - we are using this network-level debugging tool to explore the network topology and prototype new routing mechanisms, specifically the rabbit hole avoidance protocol. At the moment it isn't working. Arguably rabbit hole avoidance isn't needed on a true opennet - but a hybrid network is bound to have pockets of nodes, dungeons, which are difficult to get out of once entered, which is why we need to consider how to deal with them.
    • SSK request flooding - it seems that we send a lot of SSK requests for ARKs and so on, to the point that the spider doesn't work if its priority is too low. This needs investigating.

    Finally, I wasn't sure whether to post the following: some folk might get the wrong idea, and it does break my quasi-anonymous persona somewhat (not that I'm actually anonymous, of course..). But I promised to host it, so you'd have found it eventually, and more importantly, it might do some good: ritual humiliation (completely unrelated to Freenet) (on the web).

    2007/06/14

    a colorful amphibian

    1037

    Upgrade to 1037! Really! The long-awaited Update over Mandatory support will make life much easier for the developers when debugging the network, because nobody will be left behind by a mandatory build ever again (well, at least until the next incompatible crypto/packet level change), and the new probe request code will allow us to test a new backtracking/HTL algorithm (which should help significantly on darknets and hybrid darknet/opennets) without breaking the network. Lots of other goodies too (fewer short packets, possibly fewer timeouts, f2f file transfer...). And it'll be mandatory on Tuesday. So go get it!

    One reason for the short mandatory is that work on opennet will start in the very near future... We need opennet, because of topology issues (#freenet-refs sucks), and because we need more users, more developers, more money, more content. And we need it to grow in order to have enough users to be able to build a global darknet during the limited time remaining to us. So watch this space!

    One last thing. On the resources page, you will find a full set of Debian AMD64 ISOs (all 21 CDs). Let me know if you manage to fetch them! (Or even one of them...)

    2007/05/25

    another amphibian

    Moving along

    The original plan was to capture a load of data about the network topology and use it to determine what is wrong with the network and whether opennet is The Ultimate Solution™ (or just a necessary evil). In the process we discovered a routing bug, which was probably causing the topology data to be corrupt, as well as various more serious issues, and is fixed in 1033. It will be mandatory on June 5th, so that gives a window to fix some more bugs and implement some long-planned features in the meantime (not necessarily all of these todo items):

    • Binary blobs - migration of your favourite freesites between disconnected darknet without knowing the privkeys. Required for next item. 75% complete as of this update, should be in 1035. Less than 1 day.
    • Update over mandatory - updating your node from your peers, even if your node is so far out of date that it can't route requests to them. Will enable faster mandatory upgrade cycles when we are deploying new routing, load balancing code etc. 1-3 days.
    • Datastore fixes - various bugfixes, but the main thing will be including all the data needed to reconstruct the index in the store files (right now if the index is corrupted we lose all SSK data and the LRU list). 1-2 days.
    • New plugins - since several of our Summer of Code students will be using plugins, and since several third party projects are already using plugins, it's about time to sort out the mess that is the Freenet plugin API!
    • New congestion control - Michael Rogers has built us a new transport layer architecture, thanks to extensive simulations and input from related protocols. This involves a new packet format and new congestion control code. And perhaps even a new load management layer (token passing).
    • Multi-container freesite support - This must be implemented before releasing Freenet 0.7 IMHO. It would greatly improve the performance of larger freesites (both inserting and requesting). tar.bz2 support would improve it even more, but the only java version of tar I know about, Apache Commons Compress, has licensing issues (it's incompatible with both GPL2 and GPL3).
    • Insert memory usage - Early this year a load of work was done on reducing the memory footprint of large requests. Similar changes haven't yet been done for inserts.

    PS: Move your freesites! Build 1035, out Real Soon Now, will by default disable access to pre-1010 insecure SSKs (you can still enable access manually). In a month or two we will drop the back compatibility code which allows these (and old CHKs too) to be stored. What this means is you need to move your freesite now if it is still using an insecure SSK/USK key (one which ends in BAAE rather than CAAE, and for CHKs AAEC* rather than AAIC*). Or move your favourite unmaintained site (suggest coordination via Frost). I have moved the Greater Toad Pictures Stash.

    One last thing: The first 11 CDs of Etch for AMD64 have been inserted on the resources page. Let me know if you manage to fetch any!

    2007/04/28

    :)

    Looking up for once... mostly!

    For once things seem to be looking up. The network is, compared to recent history, zooming along, inserting a CD image in around 2 days; routing is highly specialised even on low bandwidth nodes; link crypto is secure (thanks to STS); we have 6 SoC students and I only have to mentor two of them; and FPI is in reasonable health due to Ian's newfound talent for finding rich donors. The next build will feature a full localisation infrastructure (this is 99% done, I just need to put a few more l10n keys in), so hopefully in the near future Freenet's web interface will work in your native language (contact us if you are a native speaker of a non-English language but also speak good English and want to translate for us, or to validate other people's translations). As regards opennet, we are currently collecting data on the network topology, which should tell us whether #freenet-refs is having a major negative impact; if it is, we will commence work on opennet in the near future.

    Now for the bad news. IPRED2 passed the European Parliament. This piece of legislation makes intentionally (not defined) committing, attempting, inciting, or aiding and abetting any intellectual property infringement on a commercial scale (loosely defined) a criminal offence subject to full criminal sanctions (prison time, judicial winding up, denial of legal aid etc). In its original form this applied to all "intellectual property", which would probably have criminalised the entire EU software industry; fortunately the passed version excludes patents. Unfortunately, it would probably still make Freenet illegal under incitement or aiding and abetting. It is likely to be some time before this completes the EU process, and once it does it can take up to 18 months to be enacted in each state's laws. So we are on borrowed time. If you want to help Freenet out, now's the time!

    In unrelated news, check out Ripple: it's a friend-to-friend, darknet-style currency system (there is an old version which is fairly centralised, but there is slow progress towards a true f2f system; I've had interesting discussions about routing with Ryan). The basic principle is that money is a system of IOUs: it's information plus trust. So make that system explicit, and instead of trusting the bank all the time, keep it as close to your social network as possible. It would be quite interesting for LETS-style hour currencies, but even for hard cash it would be useful. Have a look.

    2006/09/18

    :(

    Crash

    Backups are important. Not only that, but backups of the right files are important! I lost a hard disk... it turns out that my (weekly) backups didn't include my most recent GPG key (but I have the old ones), nor my Frost identity. They do now! On the upside, Servalan (aka amphibian.dyndns.org) now runs Debian Etch on amd64 (the workstation still runs 32-bit, to minimize disruption and because of openoffice and gaming complications). Now, how do you know I am still me?

    • I still have the keys to this freesite.
    • I still control toad_ on irc.freenode.net (a registered nick with op rights)
    • I have a page about this on amphibian.dyndns.org.
    • I have signed the message announcing this with two older keys which you can find on the public keyservers, and with which I probably signed some messages on the lists in the distant past: here and here.

    My new public key is E43DA450 in my keyring (4096-bit, yay!). The crash message is signed here with the new key. My new Frost ID is toad@zceUWxlSaHLmvEMnbr4RHnVfehA and my new freemail address is toad@freetoad.freemail (long version), although this isn't necessarily fully working/integrated yet; don't send any important correspondence over it yet, but if you want to send me a test message I'd appreciate it.

    Geek computers are like the TARDIS. They are immensely powerful, but they're constantly breaking, because we're constantly tinkering with them! (Well, in this case it was more a matter of abusing commodity hardware; why should a hard disk fail? I know even Seagate drives have a limited lifespan, but they shouldn't! :))

    On the other hand, last week's holiday (Monday to Thursday) was pretty cool. Mum and I went to see granny in Harrogate. We spent some time with granny, visited some nice green places, got lots of cheap videos from the charity shop, watched all four Harry Potter films, and discovered that a cousin of mine is a Red Dwarf fan. Normal service (normal chaos?) will be resumed shortly; most of the TODO list in the last-but-one entry still stands :(.

    PS, here is the text of the Pope's original speech, if anyone cares; it's primarily about religion and rationality, but does contain a subtle attack on Islam even if we accept his apology with respect to the most blunt quote.

    PPS, the below item (Hereticnet) has a Frost board called Hereticnet.

    2006/08/29

    Another amphibian

    Heretic

    There are a number of persecuted groups who would greatly benefit from Freenet's technology, but who cannot use it for moral or political reasons. For example, persecuted churches. Even if you are an atheist I hope you accept that freedom of thought, and therefore of religion, is important: you have the right to sincerely believe in Jesus, Buddha, or the Flying Spaghetti Monster, if you want to. Others have the right to ignore you and think you are crazy. So there may be room for a darknet variant which uses a lot of Freenet's code, but has different goals. Note that I am saying nothing against Freenet itself: I like Freenet, I am morally happy with it, but I think there may be room for something else as well, once Freenet has reached a reasonable level of stability.

    Such a network would be resistant to external censorship, but provide for internal censorship. In other words, it would be a high-standards darknet: a community with its own standards for content, which it could enforce through expulsions and schisms, but which is not necessarily the same as the outside world's standard. On such a network, content inserts would be tagged with a cryptographic structure allowing the insert to be traced back one hop at a time, but only with the consent of (for example) 2/3rds of the nodes surrounding each hop. If somebody finds some content they object to, they can file a complaint. This would be discussed on the chat system, and ultimately people on the network would inspect the disputed content (hence the need for a fairly 'high' standard), and decide whether to vote to trace the author, to trace the complainant, or to do nothing. If enough nodes vote to trace the author at each hop, he would be traced. He would then be identified to his direct peers, and everyone else would know his topological position. The network must then decide what to do with him. His direct peers may simply disconnect from him. Or they may choose to protect him (either after the trace or during it), in which case others may disconnect from them in turn. Irreconcilable differences will have to be dealt with by a larger network split: what was one community is now two.
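
    The per-hop trace decision above boils down to a supermajority vote among the nodes surrounding that hop. A toy sketch of just that decision rule follows; the 2/3 threshold is the one suggested above, while the vote-collection and identity-revealing machinery is entirely hypothetical.

        import java.util.Map;

        // Toy model of the per-hop decision described above: the nodes surrounding a hop
        // vote, and only a 2/3 supermajority allows the trace to proceed one hop back.
        public class HopTraceVote {
            enum Vote { TRACE_AUTHOR, TRACE_COMPLAINANT, DO_NOTHING }

            // Returns true if at least 2/3 of the surrounding nodes voted to trace the author.
            static boolean authorTraceApproved(Map<String, Vote> votes) {
                long forTrace = votes.values().stream()
                        .filter(v -> v == Vote.TRACE_AUTHOR)
                        .count();
                // Integer arithmetic: 3 * forTrace >= 2 * total  <=>  forTrace >= 2/3 of total.
                return 3 * forTrace >= 2L * votes.size();
            }

            public static void main(String[] args) {
                Map<String, Vote> votes = Map.of(
                        "nodeA", Vote.TRACE_AUTHOR,
                        "nodeB", Vote.TRACE_AUTHOR,
                        "nodeC", Vote.DO_NOTHING);
                System.out.println("Trace this hop? " + authorTraceApproved(votes));
            }
        }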

    This is by no means an easy way out of the conundrum that is freedom of speech. It requires significant effort on the part of the users, and it also requires a fairly high standard; "anything but child porn", for example, is likely to result in permanent brain damage (or at least a need for counselling) to active participants on the network, since disputed content will normally be close to the border between what is allowed and what is not. A persecuted church would have a much higher standard, while most of its content would still be illegal by the local laws. And it is likely that such networks would have major problems with splits and schisms, as any other community does. It would closely represent the underlying community. It seems to me that this would be an interesting experiment, and it might be useful to somebody. Comments? Contact me!

    2006/08/19

    Another frog

    Rolling on

    Probably a good idea to update my freesite once in a blue moon, right? Well, there have been lots of goings-on in Freenet land... Minor stuff first. Build 950 somewhat experimentally makes the node treat backoff as advisory rather than mandatory, so your node will send requests to backed-off nodes if it can't send them to nodes which aren't. This seems reasonable to me (because we have a separate load-limiting system), but it's not a long-term solution; the current mechanism has major security issues (flooding and possibly some local attacks too). Tell me what you think of performance on Frost. There have been many other improvements, a good deal of them due to our volunteers and Google SoC students (students Google pays to hack on Freenet over the summer); upgrade! As you've probably heard, John Gilmore (who remarkably enough owns toad.com) has donated $15,000 to Freenet, so I'm definitely going to be paid for the next six months. Obviously this is a good thing!
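
    Roughly what "advisory rather than mandatory" means for peer selection: still prefer peers that are not backed off, but if every otherwise-routable peer is backed off, route to one of them anyway instead of sending nothing. A sketch under that assumption; the Peer fields and the distance-based choice are made up for illustration, not the node's actual routing code.

        import java.util.Comparator;
        import java.util.List;
        import java.util.Optional;

        // Sketch of treating backoff as advisory: prefer non-backed-off peers, but fall
        // back to a backed-off peer rather than sending nothing at all.
        public class AdvisoryBackoffRouting {
            record Peer(String name, boolean backedOff, double distanceToTarget) {}

            static Optional<Peer> choosePeer(List<Peer> peers) {
                Comparator<Peer> byDistance = Comparator.comparingDouble(Peer::distanceToTarget);

                // First choice: the closest peer that is not backed off.
                Optional<Peer> best = peers.stream()
                        .filter(p -> !p.backedOff())
                        .min(byDistance);
                if (best.isPresent()) return best;

                // Advisory backoff: everyone is backed off, so route to the closest anyway.
                return peers.stream().min(byDistance);
            }

            public static void main(String[] args) {
                List<Peer> peers = List.of(
                        new Peer("a", true, 0.02),
                        new Peer("b", true, 0.10),
                        new Peer("c", false, 0.30));
                System.out.println(choosePeer(peers).map(Peer::name).orElse("none")); // prints "c"
            }
        }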

    One of the big questions in 0.7 now is opennet. Here's the deal. On darknet, you only connect to your friends (in theory; in practice most links are established via #freenet-refs etc between strangers). On opennet, the network assigns you connections. Either way, you are vulnerable to your peers: they can do correlation attacks, and probably lots more fun things. On opennet, you are probably connected to the Bad Guys, because they probably harvest the network (impossible on darknet), pretend to be a thousand nodes (hard on darknet), and connect to everyone they can - preferably more than once under different identities. In other words, opennet is vulnerable to harvesting (quickly finding large numbers of nodes) and Sybil attacks. For the time being there isn't much we can do about correlation attacks; I have a few ideas which may be implemented soon but they won't provide a very large anonymity set; expect major progress in 0.8, but it's quite likely that premix routing won't work on opennet. Even so, we still need opennet for bootstrapping: for getting slashdotters onto the network. In the long term, we need as many true darknet connections as possible, because Freenet will be illegal or attacked. One thing which is important is not to jump in too soon; many problems are easier to solve on darknet, which has lower connection churn etc. But we do need opennet, and once we have it we need people to move from all opennet to some opennet and some true darknet connections, and eventually to pure darknet, or the network will be unsustainable. What incentives we can provide is the subject of some debate.

    So when will Freenet be illegal? The DADVSI in France arguably makes it illegal there. We may have a constitutional case, or we may not, but from what I've heard we are in trouble. People who distribute or develop it may be liable to prison time and a €300,000 fine. What's more fun than the DADVSI then? The IPRED2 is a directive going through the European Parliament which makes it a crime, punishable by all manner of criminal sanctions including prison time, judicial winding up, large fines, and lots of other fun things, to intentionally commit, attempt, aid, abet, or incite (!) the violation of any intellectual property on a commercial scale. "On a commercial scale" is undefined, but caselaw suggests it's not a real protection for anything either Freenet or the free software movement does; "intentional" may not require knowledge of the specific IP according to the FFII. But even if it does, knowingly violating a software patent on the grounds that it's a load of nonsense would now become a criminal offence! The application to patents alone will seriously harm the free software movement, of which Freenet is a part, and fighting it on the patents issue is the best way to attack the directive (as we won the fight last year). But the application to copyrights might well ban Freenet EU-wide. Please don't get depressed and cynical about this. Join the FFII, read as much as you can, and write to your MEPs. Last year we fought the proponents of software patents (including IBM, which is either schizophrenic or trying to have its cake and eat it, or both) to a stalemate: All that is required for evil to triumph is that good men do nothing, but there is much that we can do (before we reach the "we must all hang together..." stage!). Here's a letter (HTML) I sent recently (addressed to Lord Sainsbury, although you should generally write to your MEPs on this); please don't send exactly the same letter, but you're more than welcome to use it in writing your own.

    Finally, the government expects all UK broadband ISPs to be blocking child porn sites using Cleanfeed by the end of 2007. They have assured me that it will only be used for this purpose, but I still worry that ISPs will be forced by litigation to block specific copyrighted content (e.g. xenu.org) - certainly they will if the IPRED2 passes. We shall see...

    TODO LIST: Current relatively high priority items:

    • Multi-container freesite support - currently any content in a freesite over the 2MB container limit is inserted separately; multi-container support will help a lot with big sites.
    • Station-to-station protocol - Freenet 0.7's crypto at the moment is vulnerable to both spoofing and MITM.
    • Low level rewrite - Michael Rogers and I have been debating congestion control, bandwidth limiting, load limiting, and a little bit of encryption for some time, while he has been writing simulations of these for his Google SoC project. A lot of this could be put into practice in the near future, although it'll be a while before token passing load limiting is ready (but when it is it will solve lots of problems).
    • Location probe requests - We have some persistent suspicions that location swapping is causing large numbers of nodes to have locations in roughly the same place, and sparse keyspace regions populated only by newbie nodes which don't last long. Location probe requests would let us investigate this, and also get an accurate size estimate for the network (a rough size-estimation sketch follows this list).
    • Freemail and SNMP - SNMP has been working for some time, and I want some pretty graphs! Freemail (Dave Baker's Google SoC project) is now entering final testing. Hopefully next time I post I can include a freemail address!
    • Other security issues - There are some nasty security tradeoffs in 0.7 (e.g. whether to cache local requests); I have some ideas for improving security without too much work (the big gains are in 0.8, where we get premix routing etc).
    • General bugfixing etc - Many minor but important features aren't yet implemented, and there are always more bugs...
    • And of course, OPENNET !
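
    On the size-estimate part of the probe-request item: if a probe towards a random target location returns the closest node location it found, and node locations were roughly uniform on the [0,1) keyspace circle, the expected distance from target to closest node is about 1/(2N), so N can be guessed as 1 / (2 * mean distance). A back-of-envelope sketch only; it assumes uniform locations, which is exactly what the suspected swapping problem would break, and it is not necessarily how the real probe requests would work.

        import java.util.List;

        // Back-of-envelope network size estimate from probe results: compare each probe's
        // random target location with the closest node location the probe found. With
        // roughly uniform locations the mean circular distance is about 1/(2N), so N is
        // roughly 1 / (2 * mean distance). Purely illustrative.
        public class ProbeSizeEstimate {
            // Circular distance on the [0,1) keyspace.
            static double circularDistance(double a, double b) {
                double d = Math.abs(a - b);
                return Math.min(d, 1.0 - d);
            }

            // Each element is {targetLocation, closestLocationFound}.
            static double estimateNetworkSize(List<double[]> probes) {
                double total = 0;
                for (double[] p : probes) {
                    total += circularDistance(p[0], p[1]);
                }
                double mean = total / probes.size();
                return 1.0 / (2.0 * mean);
            }

            public static void main(String[] args) {
                List<double[]> probes = List.of(
                        new double[] {0.12, 0.121},
                        new double[] {0.57, 0.568},
                        new double[] {0.90, 0.9035});
                System.out.printf("Estimated network size: %.0f%n", estimateNetworkSize(probes));
            }
        }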

    2006/06/02

    Yet another amphibian

    Shadows of shadows

    A freesite with a few links to stuff downloaded from a certain p2p program recently came to my attention. It contains at least one file which is pure copyright infringement, so I won't link it here. But it also contains this file, labelled on the original site "US ARMY 3 - top secret defense - Biological Weapons Technology.pdf". Sorry to burst your bubble, but it's on the web. Just google for "Those delivering BW could be". That gets you to www.fas.org/irp/threat/mctl98-2/p2sec03.pdf. Go up a few levels and you find all sorts of wonderful stuff about WMD and intelligence threats and so on. Another few clicks and you get to stinet.dtic.mil. An amazing amount of this stuff really is just out there, and there is absolutely no shame in trawling through it and finding the good stuff. Or even calling attention to it. I honestly don't know what the legal situation is with this sort of stuff, but frankly I don't care; it's almost certainly covered by copyright exceptions in my country. Have fun, and insert anything you find especially interesting!

    2006/05/03

    Another amphibian

    Welcome to Freenet 0.7!

    I have finally updated my freesite, two years after the last post - and this time it's on the long-promised Freenet 0.7 alpha darknet! Yay! Here is a page about Freenet 0.7's current security, or lack thereof, from the wiki. Also, we are looking for student coders to work with us for money over the summer, thanks to Google's Summer of Code. You will, however, need to blow your anonymity by visiting the web site to find out about this.

    Naphtala

    The excellent (if small as yet) new index called FreeHoo carries in-depth reviews of Freesites. The second site it lists was Naphtala, a site which I had never visited because I believed it contained child porn. It doesn't. It contains a fair amount of interesting content written by a paedophile, none of which qualifies as pornographic or illegal as far as I can see. He claims at one point to use non-violent child porn so that he doesn't have sex with real girls, while consistently defending his practice of "child love". Definitely worth a read, as Freesites go, but I won't link it here directly for purely paranoid reasons. Obviously I do not endorse his views: any sexual relationship with a child is evil, illegal, a sin, and likely to damage them for life (I go much further than most people here by endorsing the biblical view of sex, which is that it is a good thing, within marriage). However it is interesting to hear from the other side, and I rather pity Naphtala.

    2004/05/28

    an amphibian from KenMan

    Brain Dump - why do I do Freenet?

    I was recently bothered by the whole thing of why I do Freenet, and whether it is compatible with my ethical basis. I include the resulting braindump in case anyone cares. You may find it interesting. I rather hope this won't result in me becoming even more of a cult figure, but perhaps it will spread some useful insight.

    Why do I think it is right for me to do Freenet?

    As an evangelical Christian, I'm not the stereotypical Freenet user. I don't surf the porn sites, I accept an ultimate (although not human) Authority, and so on. I don't define myself as a Libertarian or Anarchist. Many of my brothers and sisters in Christ would probably not approve of what I am doing. So, why do I think it is appropriate for me to do Freenet?

    Firstly, why is this difficult? Well, my income comes from working on Freenet. I don't have any real financial commitments to worry about, but it is better that I am able to contribute to the family running costs. And I enjoy giving to both Tear Fund (an explicitly Christian overseas charity operating in Africa, Colombia and other "nice" places; they help people to work their way out of poverty, provide emergency relief, and witness the gospel in both their words and actions) and my local church. Thus I have a vested interest here, which makes objective ethical decisions difficult.

    So, now the meat. I do Freenet because I believe that freedom of speech is, despite not being a biblical concept, something worth fighting for in a democracy. Does this mean I idolize it? If the state has the right to taxes ("give to Caesar what is Caesar's...") and even to wield the sword (one of Paul's letters), then isn't it legitimate for it to say what people can and can't say? It is only a small step from here to "Isn't it legitimate for the state to say what people can and can't believe? Isn't it within the bounds of statehood for Rome to persecute the early Christians?". If we look at it from this perspective... Rome persecuted the early church, and failed. Badly. The church did not fight Rome (it was not a good idea historically, nor was it compatible with the Sermon on the Mount). But the Church prevailed, and those martyred (in horrible ways) will be rewarded in heaven for their perseverance.

    So how does this relate to Freenet? Well, Freenet can be used for good or evil; it could be used by Chinese student groups to keep in contact with each other and the West (for example to organize prayer when they are particularly persecuted), by urban Saudi churches to keep in contact with current Western Christian literature, by Western whistleblowers to publish things such as the Diebold files, or to publish criticism of the Church of Scientology, by paedophiles to exchange pictures of child abuse, by others to publish racist hate speech, and by yet others to publish lists of people accused (but never proved... or who have served their time) of paedophilia and related offences, for future ostracisation and lynchings. Surely it is the state's right to prevent the last 3 cases?

    Firstly, even if there were no Freenet, there would still be files that, while technically illegal, are clearly in the public interest, and should be hosted by as many people as possible. For example, the various sites on the Church of Scientology (read them, they're on Freenet). These are of clear benefit to the kingdom of heaven. The state's regulatory role is trumped by the clear good that is being done in keeping people away from this vile cult.

    I have always thought that what Paul said about the state's right to wield arms, and so on, are guidelines - true in general, but occasionally it may be necessary to go up against the state; to disobey, in extremis to actively work against the state, perhaps using violence as a last resort. The resistance against the Nazis in WWII is one clear example of this. If you have Jews in a cellar, and the Gestapo come and ask you about them, it is perfectly okay to lie to them. If a higher cause is being served it may sometimes be necessary to steal or even to kill, in such extreme circumstances. However on the whole, people lie to cover up sin, or to "fit in" ("fear of man"), people steal to feed their sinful desires, and people kill for much the same reason.

    So where does that leave us? We have to fend for ourselves, and work out exactly what the costs and benefits are. Even if the state has a legitimate right to suppress lists of people accused of crimes they were never prosecuted for (for obvious reasons), that does not mean that it will be successful. Very often such people are persecuted - often in error - because somebody in the police themselves tipped off the locals, or the paper. Once this information is available, it will spread, because it is believed to be in the public interest. Sometimes perhaps it is. If such data is published on Freenet, and the author is believed (which may be difficult - he needs to earn some trust - but this is certainly not impossible - and in this particular case it may not be hard :( ), then it is difficult for the state to get rid of it. To that extent, our building of Freenet has helped this to happen, even if we do not ourselves approve of such content's publication. Similarly in the case of paedophilia: we hide the people distributing the images, and although the police may well be able to investigate in other ways (e.g. matching the children involved), they are not able to simply trace the distributor, unless he does something stupid to give away his identity (trapping him on a message board may work). They are therefore available to a slightly larger audience at a somewhat reduced risk (although theoretically if they are not accessed they SHOULD fall out of Freenet).

    So we are clearly building something that CAN be abused. As can a kitchen knife, or just about anything else, but more specifically, weaponry. We are building a weapon here, of a sort. Like all weapons it is not tactically neutral; it cannot be used to suppress information. It can be used to prevent such suppression. My intent is that it be used for the positive purposes listed above, and more. But some will inevitably use it for evil. How do I know that the good will outweigh the evil? Let's look at the good first:

    Freenet can probably be shut down relatively easily in its current state by a moderately well-funded attacker. For example, a half-time Chinese technician (of some skill), with access to the firewall rules, could set up some nodes, harvest some references (and the seednodes), and block every node found by IP address. However, compare it with what they have now (or did before recent developments): a mailing list on which open proxies are announced. Every 24 hours or less, a new open proxy is found and announced. The government is subscribed to that list, and within 24 hours, the new proxy has been blocked. Using Freenet (or I2P, which has similar attack issues) instead makes the cost to the Chinese government higher. If Freenet gets bigger, it becomes harder for them to block, but it remains reasonably feasible. However, should Freenet routing start to work, we may be able to provide new features that make it substantially more difficult: trusted mesh routing, which would severely limit harvesting in hostile environments, and thus limit the damage from busting one node; new (steganographic) transports, which slow attacks down and make them more difficult; and so on. In fact, right now, you can identify all Freenet traffic by some identifier bytes. But we will get rid of this. And even if Freenet is never really THAT useful, there may well be future anonymous P2Ps that are. We will have laid the groundwork, even if they don't use any of our code, through our extensive practical research. Little of it is formally published, but I still think a lot of that knowledge would be out there.

    In terms of impact: revolutions are always (in China anyway) led by the students. Even in China, the students have internet access. Bloodless revolutions have happened on many occasions in the last century. But the people need to know that another world is possible in order to demand change. I do think that such things *ARE* God-blessed. Velvet revolutions as we saw in the Soviet Union, in Serbia, in Georgia, are usually surrounded by a delicate chain of coincidences that is the signature of divine intervention; God *is* interested, and he *does* care. And the stakes are pretty high: there is a good chance of China and the USA having another cold war, quite apart from human rights issues, and the persecution of the Church in China.

    In the West, files such as the Diebold files (leaked info on voting machines with major security issues), the Church of Scientology files (which the Co$ has consistently abused copyright law to pursue, even if they are readily available *NOW*), and so on, demonstrate that freedom of publication is vital to protecting democracy. And democracy has achieved great things for us, even if it is constantly under attack. At its best it can be a great means for change in a positive direction. At its worst, it devolves into plutocracy, as do all other governmental systems. The basic difference is the attitude of the electorate. If they are apathetic, evil triumphs; "All that is required for evil to triumph is that good men do nothing", as a wise American once said. So as always, true progress comes from the renewing of minds; but nonetheless democracy is valuable and worthwhile. And without freedom of speech, democracies devolve, as the public do not know what is going on, even if they DO care.

    In the future, the West may become a very dark place. I cannot know for certain but I suspect we are in the twilight of democracy here. The War on Terror expands, and if it is not over quickly, it risks destroying democracy. If there is another serious attack, we will take another major step into the abyss. The intelligence war may be going well; however, there is plenty that the terrorists could do, and the war for hearts and minds is largely ignored (as we see with the reluctance of the US to bring real pressure to bear on Israel). If it continues (and it may, as there is considerable money to be made on both sides), we are going down the tubes - and this can't easily be fought with the prospect of terrorists crashing jets into nuclear reactors on the horizon.

    The other big influence is in cyberspace, which increasingly dominates meatspace as a communications medium. It is quite possible that the little war between open source software and Microsoft et al will come to an end. If Microsoft loses, its stock price will collapse, and this will have substantial effects on the US economy (IIRC Microsoft is 20% of the S&P index, which a lot of pensions are linked to; not sure about the NASDAQ). If Microsoft wins, we will have increasingly unreasonable DRM regimes, and the War on Piracy will start to move. Because there are always bugs, automatic updates of trusted operating systems are inevitable. Because you cannot prevent people from aiming a camera at a TV screen, we may well be looking at a means of remotely deleting files from all trusted PCs everywhere that match a particular checksum. Once this is technically available, even if judicially supervised, it will be abused. Eventually it will be abused for overt political goals, but where there is a blur between copyright and freedom of speech, the former will prevail, as it usually has in the past, and increasingly will in the future as further legislation passes which makes it ever more obvious which side the legislators' bread is buttered on.

    Freenet will not be the only key piece in this fight; black-market hardware may well be an issue, and there will be many more. I personally will not be involved, once OSS (or Freenet) is made illegal, because of my prominence beforehand; but this is another area where Freenet and similar technologies may well be of some practical benefit.

    And the Church will not be unharmed either. Prohibiting radical Islam would be politically incorrect, so what is likely to happen is a prohibition of religious intolerance (there are already laws about this; we will have to wait and see how they are interpreted). Sermons and literature on why other religions are wrong, or why homosexuality is wrong, and so on, would become risky. If the WoT continues to escalate, or if there is a regional nuclear war (as the world continues to "tool up"), we may end up with explicit regulation of religion.

    So, in conclusion: I do not know whether Freenet will be a net benefit. I do know that I am writing a tool that can be used for good or evil, that the good it can be used for is considerable, and the evil that it can be used for is there. God, however, knows. So it all comes down to whether God is happy about it. Which is expressed in my conscience; the indwelling Holy Spirit informs a believer's conscience. And I don't really have a problem here, most of the time, despite having a better relationship with God than I had some years ago. It is also expressed practically. I pray to God for success, and I have success, often from unexpected quarters. I have had huge last minute donations; I have been led to enormous bugs by coincidence and hunches after prayer. I have been blessed in many ways. I also honestly believe that the project we are undertaking is massively difficult, and a lot comes down to luck; we do not have the resources for it to be otherwise. I don't believe in luck; I believe in providence. I have met several Christians through Freenet, despite the natural expectation that everyone involved in such a project would be either very shallow or a committed Libertarian atheist. And so I continue. I believe, and trust, that if it is wrong for me to do Freenet, God will reveal this, in time. And even if he does not, or if he makes it abundantly clear and I ignore it for years, I will be forgiven at the end of time, because He has paid all my debts, and gone where I could not return from, and come back, and given me the promise of eternal life, secured by the only true authority in the Universe. Glory be to the Lamb forever and ever!

    2004/01/29

    an amphibian

    Hutton Craziness

    An interesting puzzle...

    The Hutton Report (Freenet) was published today. I haven't read it (been far too busy with Freenet), but the general gist of it, according to the press, is that the government has been completely exonerated. A lot of other interesting stuff came out during the process of the inquiry, but the conclusion is interesting. Given that we still haven't found any WMD in Iraq, and Hutton says the dossier (Freenet) was _not_ 'sexed up', we have a few options:

    1. Hutton is wrong: This seems to me to be the most credible option: the government did indeed produce a misleading bunch of factoids, either completely independent of any actual intelligence, or exaggerated and carefully selected to look good. This could be a more subtle long-term problem: agencies, individuals, and sources that produce good intelligence are rewarded, but there is no way to measure how "good" intelligence is, so you check it against your political goals and preconceptions...
    2. There is a pig flying the plane: If Hutton is right, then the intelligence services did indeed conclude that Iraq had Weapons of Mass Destruction, without prompting from the government. If this is the case, given that we haven't found any, we have some more options:
      1. Iraq really does have WMD: We just haven't found them yet. This seems increasingly unlikely as time goes on. And even if Iraq does have them, they are highly unlikely to be deployable, since they weren't used in the war, and they haven't been found since, despite the disbanding of the Iraqi army and totally unrestricted access for the team of 1,400 US investigators to all sites in Iraq.
      2. Our intelligence services are incompetent: If Iraq does not have WMD, but the intelligence services impartially believed that it did, then logically a third-world nation must have deceived the CIA, the NSA, GCHQ and SIS, DIS, and all the other three-letter agencies involved in many countries. The US intelligence budget is rumoured to have been $28,000,000,000 in the late 90s and is undoubtedly much higher now.
      3. Our intelligence services are malicious: The other option is that they did not seriously think Iraq had WMD. If they then lied to the government, as opposed to the government lying to us, then they are completely out of control, and presumably had their own geopolitical agenda, perhaps something about the 7 million barrels a day Iraq could produce if its oil reserves were properly explored, and what that could do for Western economies. Or something less subtle... Of course this doesn't have to be a global conspiracy - presumably if the American agencies wanted to believe in Iraq having WMD, they could persuade the UK agencies of this fiction. Of course, if this is true, sooner or later something may be planted to sate the public...

    The other interesting part is that no British government is going to launch a preemptive war on the basis of intelligence for the foreseeable future. This is probably the real payoff here. America loses its biggest ally in the fight against States We Don't Like. How sad. Sidenote: I actually think Iraq is better off for the war, for humanitarian reasons. That's not really the issue here though.

    And now for something completely different: Freenet Update

    Freenet is currently having some major problems. Both branches are running NGRouting, which could probably be improved significantly; m0davis is working on that, when he's around, which is rare. The unstable network probably has 100 nodes, and seems to work remarkably well for new content, with insert/retrieve tests succeeding immediately even at relatively low HTLs, most of the time. Stable, however, isn't. Stable is estimated to have on the order of 10,000 nodes, as of a few months ago: Iakin ran an ubernode for 4 days and had 16,000 unique IPs contact it. The maximum HTL has been reduced to 10 on the stable network to try to reduce load. At least on stable, nodes are in overload almost all the time, and it is almost always caused by bandwidth usage. However, this is expected; we now reject queries based on outbound bandwidth usage, with the result that when we come out of overload, if we get a few queries for files that are actually retrievable (and yes, this does happen), our bandwidth gets used up and we go back into overload.

    After a somewhat paranoid conversation with an old friend, I discovered a possible attack that could have caused the current symptoms; it was fixed in 5064, but not completely resolved (you can read the thread here). This may be happening a little by accident, but I doubt that it's happening a lot by accident, because the log messages that would show it are relatively rare ("Got really late DataReply" is the obvious one). Anyway, 5064 fixes it, but it also makes probing the network for a given key a bit easier, so there will need to be further action on it.

    I am currently engaged in implementing a new load balancing system, based on the idea of enforced, explicit maximum request rates, which was originally proposed by Ian. The result should be that the load is moved back to the flooder - if a node makes too many requests, it will get RNFs rather than overwhelming the network. This should help to balance load and heal routing... but we've all heard that before. It also gives us some interesting possibilities w.r.t. fairness - we can change the minimum request interval on a per-node basis, so that we can "punish" nodes for bad behaviour (such as the attack above), or accept more requests from nodes whose queries are most likely to be successful. This should not take more than a week to implement, and it will require a network reset.

    After that, we are going to attempt to set up a "testnet", an expanded form of the old watchme network - a completely separate, non-anonymous network for testing routing, debugging, and so on. Unfortunately for it to be of any use we will need several hundred nodes to run it. But eventually it should give us a much better idea of what is going on.
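
    A sketch of the "enforced, explicit maximum request rates" idea described above: each peer gets a minimum interval between requests, anything arriving faster is rejected (so the flooder is the one who eats the failures), and the interval can be adjusted per node to punish or reward behaviour. The class and method names below are mine, not the actual implementation.

        import java.util.HashMap;
        import java.util.Map;

        // Sketch of per-peer rate limiting by enforced minimum request interval: a
        // request is accepted only if at least the peer's minimum interval has passed
        // since its last accepted request. The interval can be tightened to punish
        // misbehaving peers or loosened for peers whose requests usually succeed.
        public class MinIntervalLimiter {
            private final Map<String, Long> lastAccepted = new HashMap<>();
            private final Map<String, Long> minIntervalMs = new HashMap<>();
            private final long defaultIntervalMs;

            MinIntervalLimiter(long defaultIntervalMs) {
                this.defaultIntervalMs = defaultIntervalMs;
            }

            synchronized boolean acceptRequest(String peer, long nowMs) {
                long interval = minIntervalMs.getOrDefault(peer, defaultIntervalMs);
                Long last = lastAccepted.get(peer);
                if (last != null && nowMs - last < interval) {
                    return false; // too soon: the sender sees this as an RNF-style failure
                }
                lastAccepted.put(peer, nowMs);
                return true;
            }

            // Adjust a single peer's allowed rate, e.g. to punish flooding.
            synchronized void setMinInterval(String peer, long intervalMs) {
                minIntervalMs.put(peer, intervalMs);
            }

            public static void main(String[] args) {
                MinIntervalLimiter limiter = new MinIntervalLimiter(500);
                System.out.println(limiter.acceptRequest("peerA", 0));    // true
                System.out.println(limiter.acceptRequest("peerA", 200));  // false: within 500ms
                System.out.println(limiter.acceptRequest("peerA", 700));  // true
            }
        }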

    2003/07/31

    Rants go here! I replied to jrand0m's last post on the list IIRC, but future ones will be replied to in both places. I'm sure I'll have other things to rant about occasionally.

    Since jrand0m asked nicely, and since I knew he wouldn't mind me borrowing his template, I've decided to set up a rant site of my own.

    I actually got a mail from a Chinese citizen on the support list today. He can't get to the web site, as it is blocked. Hopefully I can send him a ZIP. I still think anyone using Freenet in China is crazier than I am though - but as Ian often points out, what they used before was even worse.
