One of the more unusual and exciting aspects of Dev8D is the idea of 'bounties', where developers create something during the event in response to a specific challenge. It calls for fast and furious coding, collaboration to draw on others' expertise, and…a lot of caffeine. The coding-collaboration-caffeine combination produces some impressive results.
This year there were challenges in the following categories:
- Linked Data API/Data Challenge
- EDINA – The Unlock Places API & Geo/Data Challenge
- Building the best IMS Basic LTI Tool – Blackboard / Learning Tools Interoperability API/Data Challenge
- Memento: Time Travel for the Web
- Internet Archive API/Data Challenge
- Mobile API/Data Challenge
- Microsoft Zentity Challenge
- EPrints 3.2 API/data challenge
- MLA Challenge
There were entries in all categories – read on for all the ideas and to find out who won.
Category: Edina
Winner: Embedded GIS-lite Reporting Widget
Duncan Davidson
“Adding data tables to content management systems and spreadsheet software packages is a fairly simple process, but statistics are easier to understand when the data is visual.
Our widget takes geographic data – in this instance data on Scottish councils – passes it through Edina’s API and then produces coordinates which are mapped onto Google. The end result is an annotated map which makes the data easier to access.”
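The pattern described is simple: look up each council name in the gazetteer, take back coordinates, and hand them to a mapping API as markers. A minimal Python sketch of that pattern follows; the Unlock endpoint URL and response fields are assumptions for illustration, not the widget's actual code.

```python
import requests

# Assumed gazetteer endpoint; the real Unlock Places interface may differ.
UNLOCK_SEARCH = "http://unlock.edina.ac.uk/ws/nameSearch"

def coords_for(place_name):
    """Return (lat, lon) for the first gazetteer match, or None."""
    resp = requests.get(UNLOCK_SEARCH,
                        params={"name": place_name, "format": "json"})
    resp.raise_for_status()
    features = resp.json().get("features", [])   # assumed response shape
    if not features:
        return None
    props = features[0]["properties"]
    return float(props["lat"]), float(props["lon"])

# Example council names; the coordinates become marker positions for the
# Google Maps front end of the widget.
councils = ["Fife", "Midlothian", "Aberdeenshire"]
print({name: coords_for(name) for name in councils})
```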
Second place: Geoprints
Marcus Ramsden
“Geoprints is a plugin for EPrints. You can upload a pdf, Word document or Powerpoint file, and it will extract the plain text and send it to the Edina API.
The API will pull out the locations from that data and send it to the database. Those locations will then be plotted onto a map, which is a better interface for exploring documents.”
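In outline the plugin's flow is: extract plain text from the deposited file, post it to a geoparsing service, and plot the returned placenames. A sketch of that flow, using pdfminer.six for the text extraction and a placeholder URL and response shape standing in for the Edina service:

```python
import requests
from pdfminer.high_level import extract_text

GEOPARSER_URL = "http://unlock.edina.ac.uk/ws/text"   # placeholder endpoint

def locations_in(pdf_path):
    """Extract plain text from a deposit and ask a geoparser for placenames."""
    text = extract_text(pdf_path)
    resp = requests.post(GEOPARSER_URL, data={"text": text, "format": "json"})
    resp.raise_for_status()
    # Assumed response shape: a list of {"name", "lat", "lon"} placemarks.
    return [(p["name"], p["lat"], p["lon"])
            for p in resp.json().get("places", [])]

for name, lat, lon in locations_in("deposit.pdf"):
    print(f"{name}: {lat}, {lon}")    # points to plot on the repository's map view
```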
Category: Memento
Winner: FireBack: time travel for Firefox
Sam Adams, University of Cambridge
“FireBack is a Firefox extension that allows you to pick a date and then browse the web as if it were that date. Get a FireBack toolbar in Firefox, enter a date and then from that point on it uses the Memento / Internet Archive data and you are browsing archives that go back to around 1998.
It's a fun thing to explore and see how websites have changed over time. It's another way of accessing data from these historical views into the web. It's an easy way to use this data – open your normal web browser and simply dial a date.”
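Behind the toolbar, each request boils down to a standard Memento exchange: ask a TimeGate for the version of a URL nearest to the chosen date via the Accept-Datetime header, then follow the redirect it returns. A minimal sketch, using the public Memento aggregator as the TimeGate purely for illustration:

```python
import requests

def memento_for(url, when="Thu, 01 Apr 1999 12:00:00 GMT"):
    """Ask a Memento TimeGate for the archived copy of `url` nearest `when`."""
    timegate = "http://timetravel.mementoweb.org/timegate/" + url
    resp = requests.get(timegate,
                        headers={"Accept-Datetime": when},
                        allow_redirects=False)
    # The TimeGate responds with a redirect to the closest snapshot (memento).
    return resp.headers.get("Location")

print(memento_for("http://www.bbc.co.uk/"))
```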
Second place: Pulse
Mark MacGillivray and Richard Jones
“We’re making an API that accesses the historical views of the page. It looks at the changes that have been recorded on that page, measures those changes, and then gives you a value of how dynamic that page is.
RSS is good but it doesn’t let you know when the text on a page is updated. This API would make an analysis of the page and give you an index value of changes. It could even make recommendations on how often you should review a page, based on how frequently it has changed in the past.
You could use the API to help you keep track of websites you visit regularly – perhaps suggesting when your favourite sites are worth checking for new content – or it could be useful if you run a large website and need to keep tabs on what changes regularly.
This is a proxy API – you could build it into other apps. It could be used as a browser plugin, for example, or on a browser speed-dial page.”
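A toy version of the "how dynamic is this page" index: compare successive archived copies of a page and average how much of the text changed between them. The scoring formula here is an illustration, not the entry's actual metric.

```python
import difflib

def dynamism(snapshots):
    """snapshots: page texts ordered oldest to newest. Returns 0.0 to 1.0."""
    if len(snapshots) < 2:
        return 0.0
    changes = []
    for older, newer in zip(snapshots, snapshots[1:]):
        similarity = difflib.SequenceMatcher(None, older, newer).ratio()
        changes.append(1.0 - similarity)   # 0 = identical, 1 = fully rewritten
    return sum(changes) / len(changes)

# A page that barely changes between captures scores low; a page that is
# rewritten between captures scores high.
print(dynamism(["hello world", "hello there world", "goodbye cruel world"]))
```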
Third place: Newsline
Adrian Mouat, University of Edinburgh
“Newsline uses Memento and the Internet Archive to allow people to connect to the BBC website and find old versions of the news homepage with a keyword search.
It means you can search for a topic, such as the Olympics or Hurricane Katrina, find all the different historical BBC news front pages over a set time period and track how the story evolved over time.”
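The same idea in miniature, sketched against the Wayback Machine's CDX listing as a stand-in for the Memento plumbing the entry used: list the archived captures of the BBC News front page over a period, fetch each one and keep those that mention the keyword.

```python
import requests

def matching_frontpages(keyword, year="2005"):
    """Return archived BBC front-page snapshots from `year` mentioning `keyword`."""
    captures = requests.get("http://web.archive.org/cdx/search/cdx", params={
        "url": "news.bbc.co.uk", "from": year, "to": year,
        "output": "json", "filter": "statuscode:200", "limit": "50",
    }).json()
    hits = []
    for row in captures[1:]:                 # first row is the column header
        timestamp, original = row[1], row[2]
        snapshot = f"http://web.archive.org/web/{timestamp}/{original}"
        if keyword.lower() in requests.get(snapshot).text.lower():
            hits.append(snapshot)
    return hits

print(matching_frontpages("hurricane"))
```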
Category: Internet Archive
Winner: Techtales
Mike Jewell, Goldsmiths and Dave Challis, Southampton
“We're using the Internet Archive to get copies of old web pages and then running the downloaded pages through an analysis application to see how things like the technology used in them have changed over the years. So we can see, for example, when they started to use stylesheets, or bold tags or alt text for images, and see when things went in and out of fashion.
It makes it possible to check how the accessibility of a site has changed over time, or the colours of a site, and then plot this on a chart, using the Google Chart API, so that it is easy to visualise. You can also compare two websites to view, for example, how the accessibility of the Oxford and the Imperial websites compares over a five or ten year period.
It's extensible so you can easily add new things to look up, whether it's the number of words on a page or the length of words used or the state of the spelling and grammar.”
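A cut-down version of that feature counting: given the HTML of one archived page, tally which techniques it uses, ready to be plotted per year with the Google Chart API. The feature list here is just the examples from the quote.

```python
from html.parser import HTMLParser

class FeatureCounter(HTMLParser):
    """Counts a few of the features Techtales tracks in an archived page."""
    def __init__(self):
        super().__init__()
        self.counts = {"stylesheets": 0, "bold_tags": 0,
                       "images": 0, "images_with_alt": 0}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "stylesheet":
            self.counts["stylesheets"] += 1
        elif tag in ("b", "strong"):
            self.counts["bold_tags"] += 1
        elif tag == "img":
            self.counts["images"] += 1
            if attrs.get("alt"):
                self.counts["images_with_alt"] += 1

parser = FeatureCounter()
parser.feed('<link rel="stylesheet" href="a.css"><img src="x.png" alt="logo"><b>hi</b>')
print(parser.counts)   # one data point per archived snapshot, charted over time
```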
Second place: Flux
Ross McFarlane
“This is a time machine for web browsing that works with events rather than just dates. So, for example, you can use it to find the version of the BBC News website that was displayed on the day Michael Jackson died, without actually knowing what that date was.
The search query for ‘Michael Jackson death date’ will be sent to Wolfram Alpha, which will guess the date. The ORE timemap then uses this date to search the website, find the closest match and redirect you to that page.”
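The second half of that pipeline in miniature: once the event date has been resolved (Wolfram Alpha in the entry), pick the archived snapshot whose capture time is closest to it. The timemap below is a hard-coded stand-in for the ORE/Memento timemap Flux parses.

```python
from datetime import datetime

# Stand-in timemap: (capture time, snapshot URL) pairs for one page.
timemap = [
    (datetime(2009, 6, 24), "http://web.archive.org/web/20090624000000/http://news.bbc.co.uk/"),
    (datetime(2009, 6, 26), "http://web.archive.org/web/20090626000000/http://news.bbc.co.uk/"),
    (datetime(2009, 7, 1),  "http://web.archive.org/web/20090701000000/http://news.bbc.co.uk/"),
]

def closest_memento(event_date, mementos):
    """Return the snapshot URL captured nearest to the resolved event date."""
    return min(mementos, key=lambda m: abs(m[0] - event_date))[1]

print(closest_memento(datetime(2009, 6, 25, 18, 0), timemap))   # picks the 26 June capture
```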
Third place: Chris Gutteridge
“This is a plugin that works with the Internet Archive and Wayback Machine, allowing different iterations of a website’s history to fade into one another. It could also be used with video to improve the visualisation of a website’s history.”
Runner up: WIFF
Mark Scott, University of Southampton
“With the Internet Archive you can find out what a website was like at a particular time and you can get the text from different points in time and see how different they were.
With WIFF you can pick a website, choose monthly or yearly intervals and how far back to go, and it will fetch that webpage at those points in time and then plot on a graph how many changes had been made to it.”
Category: Microsoft Zentity
Winner: RESTful Zentity mashup
Martin Evans
“This is a front-end to the server, which enables people to access data from another server via a web request. The system will return JSON data.
This plugin will be useful to repository managers and system developers – they’ll be able to consume the data from Zentity without even knowing it’s there. They’ll just know that if they run the commands the data will appear.”
Category: IMS / Blackboard
Winner: Wookie BaLTI
Daniel Hagon, Science and Technology Facilities Council (STFC) and Mark Johnson, Taunton's College
“Wookie allows you to embed W3C widgets in any VLE page. We used the Wave Gadget framework to make a widget collaborative and used the LTI interface to put it in a Moodle page, so you can now have real-time collaboration between Moodle users. It also plugs easily into other VLEs like Blackboard and Sakai because Wookie allows Basic LTI to interact with it. In this example we've used a molecule viewer – multiple people will be able to work on this 3D object together as each change is shown in the others' viewers.
We wanted to see how easy it would be to transfer from a Google Wave development setting into a Wookie development setting. It is an attempt to do what you can with Google Wave gadgets but in a less proprietary way and in contexts other than Google Wave.”
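For background on why "Wookie allows Basic LTI to interact with it" makes the widget portable across VLEs: a Basic LTI launch is essentially an OAuth 1.0-signed form POST from the VLE to the tool. A sketch with oauthlib, using an invented key, secret and launch URL:

```python
from urllib.parse import urlencode
import oauthlib.oauth1 as oauth1

LAUNCH_URL = "http://wookie.example.org/lti/launch"   # hypothetical tool endpoint

# Core Basic LTI launch parameters the VLE sends when embedding the widget.
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "moodle-course-42-widget-7",
    "user_id": "fred",
    "roles": "Learner",
}

# Sign the form body with the consumer key/secret shared between VLE and tool.
client = oauth1.Client("consumer-key", client_secret="shared-secret",
                       signature_type=oauth1.SIGNATURE_TYPE_BODY)
_, headers, body = client.sign(
    LAUNCH_URL, http_method="POST", body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"})

print(body)   # the signed form the VLE auto-submits to launch the widget
```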
Second place: List8D Moodlefication
Steve Coppin and Ben Charlton
“We’ve taken List8D into Moodle. In fact we’ve actually done more than that – because we’ve created an LTI plugin it can be imported into other Virtual Learning Environments (VLEs) such as Blackboard.
List8D is a reading list system which links into the library management system. Back at the University of Kent we were already planning to create a List8D plugin for Moodle, which would have taken us two weeks. But we’ve actually been able to do it here at Dev8D in two days, and we’ve done it better as it can plug into other VLEs. That’s definitely saved us time in the long-term.”
Third place: MuCoMaCo (Museum Collection Made Cool)
Sander Van Der Waal, developer, OSS Watch
“I took photos from five different collections, both the thumbnails and the high-res images, along with an Excel sheet with the names and postcodes of the museums they were from. I geo-located the museums on an OpenStreetMap map. The result displays the name of the museum and a link to its website, with a sample of pictures from that museum's collection shown at the side of the map. Every five seconds it automatically moves on to the next museum, and you can click on an image to see the larger version.
Part of the MLA challenge was to make the data available in a way that would be attractive to young people. This is a widget that could be used in a VLE like Moodle or Sakai. Students can see the widget in a sidebar and click through to see what's out there. It brings the data to them rather than making them go to the site but, if it attracts their attention and interest, they can click through to the website.”
Category: Mobile Challenge
Joint winners:
Sam Easterby-Smith and Chris Gutteridge
Dave Tarrant
Click the name to find out more about these winning entries.
Third place: Virtual Button
Steven Johnston
“QR codes are one-way buttons: you take a picture of the QR code with your phone, and it takes you to a website. However, with the virtual button you could send information via the QR code too.
For example, maybe you need to prove that you’re in a particular location. There could be a QR code in that location that was embedded with the request ‘send me your location’. So when you took a picture of the code with your phone, your phone would receive that request. If you agreed to the request, your phone could then send its GPS location to the requesting website, thereby proving your location.”
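One way to realise this, sketched with the Python qrcode library: encode a URL that carries the request in the QR code, so the scanning phone knows what it is being asked to return. The URL and parameter names are invented for illustration.

```python
import qrcode

# The encoded URL carries the request ("send me your GPS location") as well as
# the destination, so the phone app can ask the user before responding.
request_url = "https://example.org/checkin?request=gps-location&venue=dev8d-basecamp"

img = qrcode.make(request_url)     # standard QR encoding of the request URL
img.save("virtual_button.png")     # print this and put it up at the venue
```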
Category: Linked Data
Winner: Shredded Tweet
Mark Borkum, School of Chemistry, University of Southampton
“Shredded Tweet takes Twitter search results and enhances them with linked data. It makes the results more usable via more applications and with a wide variety of use cases and services.
Twitter provides XML, HTML and JSON outputs, which is useful, but there is still a lot of work the client programmer has to do to extract the information they need, whether it's detecting hashtags to find trends or looking for @ signs to find out who the tweets are aimed at. Shredded Tweet does the work to provide that information.
This application doesn’t take anything away – it just adds linked data. It pulls out the tags and URLs so you could run this through any linked data-aware tool and would be able to find all of the trends and people mentioned in tweets.
From a librarian's and archivist's point of view it will be very useful to be able to record short URLs, resolve them and store them, because in two or three years' time the URL-shortening services people use for Twitter, like Bitly, might not exist; we need to know the full URL so that we know what people were talking about.
It can also use the RDF highlight tool that puts red boxes around the linked data. For example, the International Union of Chemistry has an identifier for chemicals, and Google can pick these up in documents and find all the documents on the web that talk about particular substances. This tool can now find any tweet that mentions a particular substance.”
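A minimal sketch of this kind of enrichment: pull hashtags, mentions and URLs out of tweet text, expand shortened URLs, and publish the result as RDF with rdflib. The vocabulary (namespace and property names) is invented for illustration rather than taken from the entry.

```python
import re
import requests
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/tweet-vocab#")   # hypothetical vocabulary

def expand(url):
    """Follow redirects so shortened links are stored in full."""
    try:
        return requests.head(url, allow_redirects=True, timeout=5).url
    except requests.RequestException:
        return url

def tweet_to_rdf(tweet_id, text):
    g = Graph()
    tweet = URIRef(f"http://example.org/tweet/{tweet_id}")
    g.add((tweet, EX.text, Literal(text)))
    for tag in re.findall(r"#(\w+)", text):
        g.add((tweet, EX.hashtag, Literal(tag.lower())))
    for user in re.findall(r"@(\w+)", text):
        g.add((tweet, EX.mentions, URIRef(f"http://twitter.com/{user}")))
    for url in re.findall(r"https?://\S+", text):
        g.add((tweet, EX.links, URIRef(expand(url))))
    return g

print(tweet_to_rdf("1", "Loving #dev8d with @someone http://bit.ly/abc").serialize(format="turtle"))
```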
Second place: Jim Downing
“How linked is each LD endpoint? A dashboard showing how many of the HTTP URIs used in a SPARQL endpoint are Linked Data according to the rules of the Linked Data club. This is a rough (but effective) metric of how usable the data will be. It's easy to just encode a load of RDF, bung it in a triplestore and expose a SPARQL endpoint to it – it's just not quite as useful as Linked Data. Linked Data makes semantics much easier and more tractable to use: URIs that resolve to semantic data are miles more useful than URIs that don't. The code hoovers URIs from an endpoint and analyses each one by sending HEAD requests, using content negotiation to find semantic data. At the moment, the software simply displays the data as a table and a pie chart.”
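In outline, that measurement loop looks something like the sketch below: sample URIs from the endpoint, then HEAD each one with content negotiation and see whether RDF comes back. The endpoint URL is a placeholder and the scoring is simplified.

```python
import requests
from SPARQLWrapper import SPARQLWrapper, JSON

def sample_uris(endpoint, limit=50):
    """Pull a sample of subject URIs out of a SPARQL endpoint."""
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(f"SELECT DISTINCT ?s WHERE {{ ?s ?p ?o }} LIMIT {limit}")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [r["s"]["value"] for r in rows if r["s"]["type"] == "uri"]

def is_linked_data(uri):
    """HEAD the URI asking for RDF; count it as Linked Data if RDF comes back."""
    try:
        resp = requests.head(uri, allow_redirects=True, timeout=5,
                             headers={"Accept": "application/rdf+xml, text/turtle"})
    except requests.RequestException:
        return False
    content_type = resp.headers.get("Content-Type", "")
    return resp.ok and ("rdf" in content_type or "turtle" in content_type)

uris = sample_uris("http://example.org/sparql")       # placeholder endpoint
score = sum(is_linked_data(u) for u in uris) / max(len(uris), 1)
print(f"{score:.0%} of sampled URIs dereference to RDF")
```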
Third place: Richard Palmer
“This is a way of visualising SPARQL with Venn diagrams. It could be used as a learning tool – a way to illustrate queries when writing SPARQL.”
Runner-up: Chris Gutteridge
“Data.gov has lots of great data but RDF and SPARQL can be intimidating for people who aren’t familiar with them.
I’m creating a beginner’s toolkit to help people start using RDF. It’ll get them using the data in a quick and easy way, to prove to them that it works. Examples are available here: http://lemur.ecs.soton.ac.uk/~cjg/Graphite/examples/gov1.php ”
Category: MLA
Winner: MuCoMaCo (Museum Collection Made Cool)
Sander Van Der Waal, developer, OSS Watch
“I took photos from five different collections, both the thumbnails and the high-res images, along with an Excel sheet with the names and postcodes of the museums they were from. I geo-located the museums on an OpenStreetMap map. The result displays the name of the museum and a link to its website, with a sample of pictures from that museum's collection shown at the side of the map. Every five seconds it automatically moves on to the next museum, and you can click on an image to see the larger version.
Part of the MLA challenge was to make the data available in a way that would be attractive to young people. This is a widget that could be used in a VLE like Moodle or Sakai. Students can see the widget in a sidebar and click through to see what's out there. It brings the data to them rather than making them go to the site but, if it attracts their attention and interest, they can click through to the website.”
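The geolocation step, sketched with OpenStreetMap's Nominatim geocoder as one way of turning the spreadsheet's postcodes into map coordinates. The entry doesn't say which geocoder was actually used, and the museum names and postcodes below are made-up examples.

```python
import requests

def postcode_to_latlon(postcode):
    """Geocode a UK postcode with Nominatim; returns (lat, lon) or None."""
    resp = requests.get("https://nominatim.openstreetmap.org/search",
                        params={"q": postcode, "countrycodes": "gb", "format": "json"},
                        headers={"User-Agent": "mucomaco-sketch"})
    results = resp.json()
    if not results:
        return None
    return float(results[0]["lat"]), float(results[0]["lon"])

# Made-up example rows standing in for the museums spreadsheet.
museums = {"Example City Museum": "EH1 1AA", "Example County Museum": "OX1 1AA"}
for name, postcode in museums.items():
    print(name, postcode_to_latlon(postcode))   # marker positions for the map widget
```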
Second place: Sam Easterby-Smith and Chris Gutteridge
“We’re taking the museum’s photographs and adding geographical data to them using Google Maps. So users will be able to look at a map of the UK and see where these items are located.
Where there are several items in one location – maybe 50 photographs in one museum – one photo will be selected at random to display. So individual photos can be used to generate interest in the whole collection.”
Category: EPrints 3.2 challenge
Winner: PDF Metadata Extraction and Social Tagging
John Harrison, research associate, University of Liverpool
“This offers a quick way to analyse things like font size, colour or bold tags within a PDF and use that to extract chapter or section headings. This means that it can identify titles and authors.
It will also pull out the references section by looking for the section that follows the header that says 'References'. It takes the text from that section and pulls it out. It can submit some of that text to Open Calais and use the results to generate social tags to apply to the document.
It will be useful to adapt into a plugin for EPrints, as EPrints repositories hold a huge number of PDFs that have been submitted but not tagged or catalogued properly. It allows automatic citation analysis, so it can link references directly to the resources mentioned, which will speed up the process of tracking down references.”
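A stripped-down version of the font-size heuristic described above, using pdfminer.six: walk the PDF layout and treat lines whose characters are much larger than the document average as candidate titles or section headings. The references extraction and Open Calais tagging steps are left out.

```python
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTChar, LTTextBox, LTTextLine

def candidate_headings(pdf_path, ratio=1.5):
    """Return text lines whose average font size is well above the document's."""
    lines = []                                     # (average char size, text)
    for page in extract_pages(pdf_path):
        for box in page:
            if not isinstance(box, LTTextBox):
                continue
            for line in box:
                if not isinstance(line, LTTextLine):
                    continue
                sizes = [ch.size for ch in line if isinstance(ch, LTChar)]
                if sizes:
                    lines.append((sum(sizes) / len(sizes),
                                  line.get_text().strip()))
    if not lines:
        return []
    body_size = sum(size for size, _ in lines) / len(lines)
    return [text for size, text in lines if size > body_size * ratio]

print(candidate_headings("deposit.pdf"))
```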