Many Pies
I have my fingers in many pies: IT/techie/charity/non profit/nptech/mission stuff. Founded 2004

Tuesday, December 22, 2009
I've just finished compiling a list of places where we duplicate information. It struck me that software engineering is like any other engineering: you can't have it all. You can't design a car that is fast, cheap and reliable; you can pick any two of the three and design a car to fit those criteria.
In my previous job in an electronics firm the engineers had to explain to the management why we couldn't produce products quickly that were powerful and yet cheap. "Pick any two", we'd say, "and we'll manage those".
In our information systems we'd like them to be integrated, to handle complicated information and to be easy to update. One of the places where we duplicate information is where we'd previously chosen integrated and complicated, putting up with difficulty of updating. However, ease of updating has become more important, so what we've sacrificed instead is the integration: the data is partly duplicated in a custom system that handles the complicated data nicely.
In another place we have data entered by hand which also exists in a spreadsheet, so we can do a quick mail merge to get the data out in several ways. In that case we've sacrificed integration because typing the data in was less effort than the gymnastics of getting it out in those several ways.
Fortunately the list of duplicated data is quite short, because the data is mostly fairly simple and so can live in the integrated system (Raiser's Edge).
Wednesday, December 16, 2009
Pies and Bible Translation Statistics
How could a blog with pies in the title resist linking to a blog post about pies and Bible Translation - with pictures!
Labels:
wycliffe
Tuesday, December 15, 2009
A review of "The Website Owner's Manual"
My review of The Website Owner's Manual:
This isn't a manual for web designers, it's for those people who are responsible for a website in any way. It covers pretty much everything non-technical you need to know - setting up a project, overseeing direction and design, testing, launching and monitoring. From comments I've heard Paul Boag make, as a web designer he wrote this for his clients, so he wouldn't have to keep on answering the same questions over and over!
Each chapter has a helpful introduction, and at the end some action points on what to do next. This is useful as there's so much information in each chapter that it can seem a bit overwhelming, especially if you've been landed with the job of managing a website without much previous experience. I really can't find much to criticise in this book - it has a really wide coverage of most of the things you need to know about.
Thursday, December 03, 2009
Digimission - first post
Two days ago I went with a couple of colleagues to Digimission (recordings) which aimed to "explore how technology shapes faith, church and mission". I can't find the quote that got me interested, but it was something along the lines of the subtitle for the book that was given free to early bookers: "Flickering Pixels - How Technology Shapes your Faith". That was one of the themes of the day - the message is affected by the medium it is transmitted through.
The people there were a diverse group - from a variety of organisations, or leaders of churches. One of them was Jonny Baker, whose blog I have been reading for a few years now. I'm looking forward to seeing the powerpoints because the only thing I wrote down from his talk was the phrase "mainframe Christianity".
There was a plug for Faith Journeys from Christian Research (not sure which is their website), which is interesting. It's built on a platform used by major companies to research what people think of their products. It gives people a chance to store stories about their faith journey, possibly just for their own benefit. However, it also gives Christian Research a chance to ask people questions about their faith: how gradual the process is, what ages milestones happen at, and so on.
Someone showed a YouTube video of Ricky Gervais talking about the Bible on his Animals tour. It has adult language at one point, so I won't link to it, but he talks about Genesis in a very fresh way.
I'd like to draw some conclusions, but the thoughts are still rattling round in my brain, so I'll probably do that in another post.
Links to other articles can be found on twitter: #digimission
Three of the speakers on the panel discussion:
Mark Meynell, Maggi Dawn, Jonny Baker
Labels:
web
Thursday, November 26, 2009
Lessons learned from a book collaboration
A new book is out: Social by Social, "A practical guide to using new technologies to deliver social impact". It's available as a free PDF download as well as a non-free dead tree version.
At a quick glance it seems full of useful stuff, along with some familiar things if you've followed blogs with the word "social" in the title.
The bit that grabbed my attention was the stuff in Chapter 9 about the making of the book. As someone whose job it is to make sure people can use the tools provided, some things struck me about their choices and the problems they had. (It's good to see such honesty in the first place too!)
The group was, as a whole, pretty technology savvy – or at least that was the assumption. This assumption led to the first major mistake [emphasis theirs]: not enough thorough evaluation of each participant’s level of social media competency and experience.
...
We ended up defaulting to e-mail quite quickly for two reasons:
firstly, because everyone was definitely using it; and secondly because we trusted it to give us our own record of what had been said that we knew we could rely on.
...
The project wiki was useful for collating content together, but it became cumbersome and ineffective for editing the final document together: it was too text-focussed and wasn’t useful for showing layout and graphics to the designer, and also it wasn’t appropriate for delivering to the client at NESTA and inviting formal feedback and signoff. We ended up collating the final handbook in Microsoft Word and using e-mail and tracked changes – which worked very efficiently but broke our collaborative approach in favour of getting the job done.
I'm not surprised they ended up using email. Even though it's not very good for collaboration, there's no obvious replacement (I wonder how they would have got on with Google Wave?). Wikis are very text-orientated, so you can see why they wanted to use Word for layout. But multiple copies of a document with track changes is still a bit clumsy. There must be a need for a good tool to do that sort of thing.
It's worth reading that chapter to hear what the other three major mistakes were.
Wednesday, November 25, 2009
Bug fix to blog post - a meta post
It's not often that you write a blog post to report a bug fix, but I put one on the Wycliffe Bible Translators UK blog recently. The bug was amusing, but probably not many other bugs I found would be even vaguely interesting to anyone.
Tuesday, October 20, 2009
Weather forecast via twitter - wycombeweather
Ages ago I thought it would be handy to have a local weather forecast delivered to my phone, for free. Then twitter came along and it looked like that might provide a possibility. By this time I'd come across the BBC weather RSS feeds.
I tried a couple of "RSS to twitter" services and both worked once and then never again. Google App Engine looked like a good way of finding a server to do the work of joining RSS to twitter. So I cobbled together bits of Python code and came up with this. (Update: September 2010 - updated to use the oauth library. You'll need to register your app via dev.twitter.com to get the four keys below.)
(Paste in the code from feedparser.org, commenting out the main program stuff. Paste in the oauth code from http://github.com/mikeknapp/AppEngine-OAuth-Library/blob/master/oauth.py)
You can see the results at twitter.com/wycombeweather and wycombeweather.appspot.com. If you want to do it for your local UK weather you'll need to change the figure 2111 in the code below and use your own twitter account.
# Cobbled together from
# http://highscalability.com/using-google-appengine-little-micro-scalability
# http://pydanny.blogspot.com/2008/04/feedparser-does-not-work-with-google.html
import wsgiref.handlers
import urllib
from google.appengine.api import urlfetch
import base64
import feedparser
import StringIO
from google.appengine.ext import webapp

def getWeather():
    content = urlfetch.fetch("http://feeds.bbc.co.uk/weather/feeds/rss/5day/id/2111.xml").content
    d = feedparser.parse(StringIO.StringIO(content))
    if d.bozo == 1:
        raise Exception("Cannot parse given URL.")
    return d['entries'][0]['title']

class WeatherText(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/html'
        self.response.out.write(getWeather())
        self.response.out.write('<br>supported by backstage.bbc.co.uk')

class UpdateWeather(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        message = getWeather()
        # self.response.out.write(d['entries'][0]['title'])
        # message = datetime.now().ctime()
        payload = {'status': message}
        # payload = urllib.urlencode(payload, True)  Removed when switching to oauth client
        # Used to get rid of degree marks because they turned out as question marks in
        # the final tweet. (Degree marks now work on twitter and appear as "deg" in txt,
        # but are still wrong when the main URL is viewed.)
        # payload = payload.replace('%3F', '')
        # self.response.out.write(payload)

        # Your Twitter application ("consumer") key and secret.
        # You'll need to register an application on Twitter first to get this
        # information: http://www.twitter.com/oauth
        application_key = "im_not_telling_you"
        application_secret = "nor_this"
        # Get these from http://dev.twitter.com/apps/your_app_number/my_token
        user_token = "this_is_a_secret"
        user_secret = "this_is_definitely_a_secret"
        # In the real world, you'd want to edit this callback URL to point to your
        # production server. This is where the user is sent after they have
        # authenticated with Twitter.
        callback_url = "%s/verify" % self.request.host_url
        client = TwitterClient(application_key, application_secret, callback_url)
        result = client.make_request("http://api.twitter.com/1/statuses/update.xml",
                                     token=user_token, secret=user_secret,
                                     additional_params=payload, protected=False,
                                     method=urlfetch.POST)
        # Removed when oauth implemented:
        # base64string = base64.encodestring('%s:%s' % (login, password))[:-1]
        # headers = {'Authorization': "Basic %s" % base64string}
        # url = "http://twitter.com/statuses/update.xml"
        # result = urlfetch.fetch(url, payload=payload, method=urlfetch.POST, headers=headers)
        self.response.out.write(result.content)

def main():
    application = webapp.WSGIApplication([('/', WeatherText),
                                          ('/updateweather', UpdateWeather)],
                                         debug=True)
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == '__main__':
    main()
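Nothing in the code above actually triggers /updateweather on a schedule - on App Engine that's normally done with a cron.yaml file alongside app.yaml. A minimal sketch of what that might look like (the every-6-hours schedule and the description are my assumptions, not something shown in the post):

cron:
- description: tweet the latest forecast
  url: /updateweather
  schedule: every 6 hours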