Sensor trip ideas

I’ve just put in a Seeed order for some more Grove sensors (because they’re very easy to set up and use). Here’s what I’m thinking of doing with them:

  • Collect the usual environmental data – air quality, dust, temp/humid, water, alcohol – and compare against readings from the same sensor set that’s in the Brck office.
  • Wildlife detection using motion sensor, mike, and camera unit.
  • Investigate fire/smoke effects using smoke and gas sensors; I’m more interested in effects of different types of cooking fuels in houses, but could adapt this for camping too (a group I belonged to lost a member to fumes in their tent, so this is of value to me).
  • Human monitoring using alcohol detector and galvanic skin response monitor (skin conductance, which varies with sweating).

I’m also thinking about what sort of UAV or balloon data would be useful from a short trip.

  • Even 10 minutes of data would be good – especially if it a) is messy (e.g. oblique), and b) contains features like buildings.
  • Although 10 minutes of nice clean stabilised downward-looking imagery is also a good thing to have: garbage in, garbage out really does apply to aerial images.

Whither crisismapping?

[Cross-posted from OpenCrisis.org]

Crisismapping has never been just about Twitter feeds; it’s always been about data.  But what data, and how do we know what’s useful?  I’ve been looking back over 4 years of archived data to start answering that one. 

In truth, I’ve been having a bit of an identity crisis.  I see all the “big data” work on social media feeds, and although I can swing an AWS instance and the NLTK toolkit like a data nerd, for me personally, that’s not where the value of crisismapping has been.

It’s been about the useful, actionable data, and about connecting the people who have it with the people who need it. And whilst some of that data lies hidden in Twitter streams and Facebook requests, most of it is already on people’s servers and hard drives, often in formats that can’t be combined or understood easily.

So, some first things that make a difference every time:

  • Rolodexes: knowing which response groups to follow, and who’s likely to bring what help. 3Ws are part of this – but before the 3W (who’s doing what where) comes the “who”.
  • GIS data: knowing where medical facilities, schools, roads and bridges are makes a difference. Knowing what communications are available is important, so knowing where cell towers are helps, but might be too coarse-grained: using signal maps to know which areas have cell coverage is often more useful. For me, mapping cell towers is problematic for the same reason that mapping military bases is problematic: they’re both potential sources of help in a crisis, but they’re both critical infrastructure whose locations are potentially sensitive information. But many maps include them (e.g. OpenSignal’s coverage maps).
  • Demographics: very useful data, but finding even population counts at sub-country levels can be difficult. They’re usually there (except perhaps in countries like DR Congo where surveying is difficult), but finding the “there” can be hard. I’d add technology and social media use to demographics, because there’s no point sniffing Twitter if only 0.5% of the country (and mainly expats) use it – there used to be sites that listed, for example, Facebook and Twitter usage percentages for each country, but they all seem to be behind paywalls now.

After that, it’s the emerging data: the 3Ws, the situation reports (both official, via news sources and on social media), the field notes about what’s happening.

We also now have 4 years of historical crisis data collected and collated by volunteers, often in areas prone to repeated crises, on top of the data already available through organisations and groups that existed before crisismapping was a “thing”. I’m not entirely sure what the value of that data is to the next crisis (like wars, every crisis is subtly different), but it’s certainly worth working that out.

Starting with sensors

[cross-posted from sensornews.com]

sensors at the iHub Kenya

I’m at the iHub Nairobi today with a bunch of sensors (thanks for the loan, Brck team!), because some of the Kenyan ideas for today’s Space Apps Challenge projects are sensor-based.  Those projects didn’t happen, but I’ve been having some very interesting chats with people about the hardware we have here, about their own use of hardware, and about why coders aren’t including hardware in their projects.

Aside from utility (not every project needs sensors, just as not every project needs a web interface), the two big blocks appear to be unfamiliarity and fear. First, the fear: generally that using hardware will be hard to learn, or that you’ll break equipment irreparably. And the unfamiliarity: coders are used to software, and hardware can seem very different to software, at first.

The fear: I suffered from these fears too, as I got back into hardware. That combination of “oh grief I’m going to break it” and “but what could I possibly do that’s useful with this stuff”. The answer is really quite simple. Buy some kits, start putting them together, and learn from that a) what works and doesn’t, b) why, how, and when hardware is useful, and c) how to make your designs more useful. Yes, things break; no, the outputs aren’t always perfect, but the point of using kits is to learn, and fast. If that sounds familiar, it’s because it’s the same ethos that drives agile software development: build, fail, learn, build better until it fits what you need.

And speaking of familiarity: electronics is really coding with things.  You have basic units (components), basic things you can do with them (connect together, run voltage through them, read outputs from them etc), that you design together into a working system.  And if you’re using microprocessor-based kits (Arduino, RaspberryPi etc), then you really *are* coding, because you’re programming the microprocessor to send signals, respond to data inputs etc.
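
To make that concrete, here’s a minimal sketch in MicroPython (Python for microcontrollers) of that “read inputs, send signals” loop – the board, pin numbers and threshold here are illustrative assumptions, not taken from any particular kit:

# A minimal MicroPython sketch: read an analog sensor, respond on an LED.
# Board, pin numbers and threshold are assumptions - check your kit's docs.
from machine import ADC, Pin
import time

led = Pin(25, Pin.OUT)   # a signal we send: the board's LED
sensor = ADC(26)         # a data input: analog sensor on an ADC pin

while True:
    reading = sensor.read_u16()             # raw value, 0 to 65535
    led.value(1 if reading > 30000 else 0)  # respond to the input
    time.sleep(1)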

I’m doing this the easy way. I’m starting with components that slot together and have code that’s already written for them: the Grove sensor series. The kit you need to start getting sensor results with these is:

Erm, that’s it. You’ll need to download the basic Arduino software, and find the example code for your sensor somewhere like the Seeed wiki, but for about $45, you too can start experimenting with sensor data. I’ve just left a stack of Grove sensors and a couple of Arduino Unos at the Brck Nairobi office; I’m using the same stack in New Jersey so we can compare results and ideas. We’ve already got data from this experiment – proving that the Brck office is dustier than the Ushahidi office next door isn’t a great leap forward in knowledge, but taking these sensors out into the field and getting comparable data from places without good coverage from ‘official’ air quality monitors is.
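
If you want to capture those readings on a laptop rather than watching the Arduino IDE’s serial monitor, a few lines of Python will do it. This is a minimal sketch, not our exact logging setup: it assumes your Arduino sketch prints one reading per line over USB serial and that you’ve installed the pyserial library; the port name and baud rate below are placeholders:

import serial  # the pyserial library: pip install pyserial

PORT = "/dev/ttyACM0"  # placeholder - often COM3 or similar on Windows
BAUD = 9600            # must match Serial.begin() in your Arduino sketch

with serial.Serial(PORT, BAUD, timeout=2) as ser, open("sensor_log.csv", "a") as log:
    for _ in range(60):  # capture a minute or so of once-per-second readings
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if line:  # skip the empty reads you get when the port times out
            print(line)
            log.write(line + "\n")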

The journey from here involves lots of placing sensors and learning how they fail, what they do under stress, and what their limitations are. It’s also, ultimately, about reducing the number, size and price of components needed to produce usable and useful sensor data, learning from pioneer communities like Public Laboratory and RiverKeeper, and making it just ‘normal’ to include sensors in system designs and ‘normal’ to plug them into existing equipment like Brck and mobile phones (I have a Geiger counter that plugs into my phone’s audio port – I’d love to see more of that sort of reuse out there).

And now, back to thinking about questions like “could you build a gas sensor into your clothes”.  I just happen to have an MQ-5 gas sensor in front of me, and am thinking about what it would take to get from there to an alarm ringing on my phone…
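
For the record, here’s roughly the shape that gas-sensor-to-phone-alarm pipeline could take in Python. Emphasis on sketch: it assumes the MQ-5’s value arrives over serial as one number per line, and both the threshold and the notification URL are placeholders, not a real calibration or a real push service:

import serial    # pip install pyserial
import requests  # pip install requests

THRESHOLD = 400  # raw sensor value - MQ-5s need calibrating, this is a placeholder
WEBHOOK = "https://example.com/notify"  # hypothetical push-to-phone endpoint

with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as ser:
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if line.isdigit() and int(line) > THRESHOLD:
            # a real version would rate-limit this and check the response
            requests.post(WEBHOOK, data={"message": "Gas level high!"})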

Maps of Maps

[Cross-posted from OpenCrisis.org]

I amused myself last night by answering one of my burning questions, namely “can I make a better list of crisis maps out of all the partial lists I have lying around?”.

Here’s the original map of Ushahidis:

Here’s a copy of the draft results (if you want edit access to the real thing, just ask… and blame the spammers for this – they’re even targeting maps now) – my other unanswered questions include whether the number of new maps has dropped off, risen or stayed steady, and how the categories lists have changed over the past few years (I’ll put the scraper for that into github).

And here, for balance, are some Esri crisis maps. Because I just downloaded their WordPress plugin, and it’s, kinda, playtime.

Processing all teh Files in Directory

[Cross-posted from ICanHazDataScience]

Okay. So we’ve talked a bit about getting set up in Python, and about how to read in different types of file (cool visualisation tools, streams, PDFs, APIs and webpages next, promise!). But what if you’ve got a whole directory of files and no handy way to read them all into your program?

Well, actually, you do.  It’s this:

import glob
import os

datadir = "dir1/dir2"
csvfiles = glob.glob(os.path.join(datadir, '*.csv'))

for infile_fullname in csvfiles:
    filename = infile_fullname[len(datadir)+1:]  # strip "dir1/dir2/" off the front
    print(filename)

That’s it. "os.path.join" sticks your directory name ("dir1/dir2") to the file pattern you’re looking for ("*.csv" here to pull in all the CSV files in the directory, but you could ask for anything, like "a*.*" for files starting with the letter "a", "*.xls" for Excel files, etc.). "glob.glob(filepath)" uses the glob library to get the names of all the files matching that pattern. And "infile_fullname[len(datadir)+1:]" gives you just the names of the files, without the directory name attached (os.path.basename does the same job).

And at this point, you can use those filenames to open the files, and do whatever it was that you wanted to do to them all.  Have fun!
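
For example, carrying on from the loop above (and assuming the files really are CSVs), the standard csv module will read each one row by row:

import csv

for infile_fullname in csvfiles:
    with open(infile_fullname, newline="") as f:
        for row in csv.reader(f):
            print(row)  # swap this print for whatever you want to do to each row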

Sensor Shopping

[Cross-posted from Sensornews.com]


Here’s the list of items that should arrive at home soon: a basic sensor set to supplement the Geiger counter, spectroscopes, cameras, microcontrollers (Arduino and RaspberryPi), accelerometers, temperature, IR and range sensors in my toolkit.

I’m most excited about the dust sensor, because it was a component in Matt Schroyer’s DustDuino sensor (as seen on PublicLab.org) that’s being trialled in the developing world by Internews’ Earth Journalism Network… it will be interesting to see what I can get out of it.

Sensors in Weather Reporting

“We all know that the weather with which the barometer sympathises, is considered to consist of three independent variables – the velocity of the wind, its temperature, and its dampness. It is a question how far the direction of the wind need be reckoned as a fourth distinct influence” – Francis Galton (first weather reporter) [Galton 1870]

A Little History

Weather prediction dates back millennia, to at least the 4th-century-BC Babylonians, and recorded weather measurements, on which forecasts are based, date back hundreds of years: the Central England Temperature series, started by amateurs, has been recorded continuously since 1659 [Saner 2007].

Figure 1: First Weather Report, 1875

Figure 2: 1861 Weather Report with Symbols

Weather reporting in the media dates back to 1875 with Francis Galton’s weather observation maps in The Times (above, with Galton’s 1861 map using symbols); radio broadcasts of weather information started in 1916 at the University of Wisconsin-Madison’s 9XM studio, with UK radio broadcasts of weather information in 1922 (British Broadcasting Company), and commercial US radio reporting of weather forecasts in 1923 (Edward Rideout’s reports on WEEI Boston; [Kutler 2003]). Television weather reporting started in 1941 at WNBT-TV, New York [Monmonier 2000].

The sensors and platforms used in weather reporting vary from manual reading and reporting of simple sensors in Stevenson screens, to digital instruments, weather balloons and satellite-mounted Doppler radars.  Outputs from these sensors are usually gathered by national, local or global meteorological organisations (e.g. NOAA, or the World Meteorological Organisation); these outputs and/or detailed analysis of them, including weather forecasts, are passed to media outlets for use in weather reports.

Early US television weather reports (e.g. Weather Man in the 1950s) were simple textual descriptions (e.g. “Sunny, chance of showers”) without maps. The first use of satellites in weather reporting was in 1960, when TIROS-1 was launched to send back cloud-cover images of Earth from two television cameras (one high-resolution, one low-resolution); later (1960-1965) TIROS satellites included radiometers (measuring infrared radiation), with missions including detecting cloud cover during hurricane seasons and detecting snow cover; news stories arising from their use included the early detection of Hurricane Esther in 1961, and the first complete view of the world’s cloud cover in 1965. The camera resolution of the last TIROS satellite launched was 2 miles at the camera centre (the area covered by a camera pixel is typically larger at the image’s edges than at its centre), with each image covering 640,000 square miles [NASA 2013].

What’s Available Online Today

Weather reporting using satellite radar pictures and outputs from weather stations is now commonplace. The radar spectrum has several ‘notches’ where radar waves are absorbed by water molecules, making storm clouds easier to see; weather stations give temperature, rainfall, windspeed, etc.

The availability of personal weather stations (including wifi-enabled personal weather stations) has made it much easier for weather-based communities to form.  Examples of grassroots communities dedicated to sharing local weather reports include:

As personal weather stations become increasingly automated and include more sensor types, this trend for micro-weather reporting, and its potential to fill in data gaps in macro reporting, is most likely to continue.

References

What is a Sensor?

“Sensor. Noun. A device that detects or measures a physical property and records, indicates, or otherwise responds to it.” – Google

A sensor detects physical variations in the world, e.g. light, temperature, radiowaves, sound, magnetic fields, vibration, particles (e.g. pollution, radiation) or objects (e.g. water droplets).

Humans contain Sensors

Humans and other creatures contain sensors: eyes, ears, nose, tongue, skin. Humans are very good general-purpose sensors.

These sensors are sometimes used in sensor journalism, but so too are manmade sensors like cameras, motion detectors and thermometers.

In many situations, manmade sensors are more appropriate. Humans tend to fail the “dull, dirty, dangerous” test: their attention wanders on boring tasks, and it’s not fair to put them into dirty or dangerous situations where a manmade sensor could go instead; they also can’t detect much of the physical world (e.g. radiowaves) without help, and their outputs aren’t always accurate enough for the task in hand.

Manmade Sensors are Old News

Manmade sensors convert common physical quantities (light, temperature etc) into measurements, actions or stimuli (sounds, light etc). They have been designed and used for centuries.

Low-Tech Still Works

One area of journalism where sensors are commonly used is weather reporting. At the low-tech end of modern weather recording is a Stevenson screen with manually-read instruments. A Stevenson screen is a wooden box, designed in 1864, that shelters a thermometer from direct weather (rain, snow, wind) and other objects (leaves, animals) that might damage the instruments inside it or bias readings from them. Stevenson screens are used for weather reporting worldwide, each holding a standard set of instruments.

Stevenson screens are a good example of the care needed to obtain minimally-biased sensor readings. The box and its positioning are standardized by the World Meteorological Organisation to minimize instrument bias: e.g. all boxes are mounted 1.25m high to minimize ground temperature effects, louvred to minimize the effects of still air (e.g. overheating), and built with doors opening north (in the northern hemisphere), to minimize reading errors from direct sunlight.

But Electronics can be Convenient

Each of the instruments above has an electronic equivalent: a sensor that can provide data remotely, without a person needing to visit the Stevenson screen. Digital sensors (sensors that convert physical quantities into electrical signals) are more recent, including digital cameras (Sasson 1975) and other sensors whose outputs can be sent to electronic storage, over wifi links, or directly to computer processors. We’ll talk more about these later.

Who Owns Crisis Data?

[Cross-posted from OpenCrisis.org]

N.B. After investigating the complaint in detail (see this blogpost), we re-opened the resource doc containing the dataset discussed below later on 09 Feb.

This is one of several posts going up as a result of OpenCrisis investigations into a complaint about one of its datasets.  This post will concentrate on the ownership aspects of that complaint.

Please note that this is currently a ‘holding’ blogpost, so people affected can be updated on what’s happening (sadly, all work on this much-used dataset has ceased until we get this all straightened out) – we’ll be adding to it as we have time (we’re concentrating more on making sure the dataset isn’t breaking its stated objective of “first, do no harm”) and as we learn more about this important issue.

Some background: starting late last year, OpenCrisis began assembling a dataset (South Sudan humanitarian data, geodatasets, Twitter, etc) for its own work, which it then shared with another group (let’s call them ‘LovelyPeople’) that was collecting information for a journalism group (let’s call them ‘Z’). This second collection was deemed sensitive by ‘Z’, so we carefully left their deployment off our list of groups working on this crisis (for the record, we do this a lot, and will continue to do it for anyone who asks us to). OpenCrisis made some of its original data (anything considered sensitive was kept private) available to people working in South Sudan via a publicly-visible but not widely-advertised spreadsheet.

Unfortunately, 5 Twitter names made their way from ‘Z’s sheet into the new spreadsheet, and ‘Z’ or one of its representatives (it’s unclear which) is now threatening to sue two individuals in OpenCrisis for the reuse of ‘their’ data. To put this in perspective, the contribution from OpenCrisis to ‘Z’ was roughly 20+ Facebook addresses, 20 blog addresses, 60 multimedia records, virtually all the local media outlets cited by ‘Z’, virtually all the Twitter lists listed by ‘Z’, 50-70 Twitter names, and a direct copy (credited) of the OpenCrisis crisis mapping page as it was at the time. This is all made more confusing because a third group, LovelyPeople, were also involved, and the OpenCrisis member concerned (Brendan O’Hanrahan) believed that the work with LovelyPeople was on the basis of mutual benefit, because that was stated when he joined the project.

Just to be clear on the OpenCrisis position on this dataset: ‘Z’s specific problem appears to be with the list of Twitter users. There are many, many Twitter lists containing the data in question now – so much so that OpenCrisis stopped updating their spreadsheet list back in January – and we have no problems with removing any content that we can’t prove is our own.

But we don’t want to live in a world where data ownership and worrying about being sued is a concern for every mapper trying to improve the world.  We might get sued, but this isn’t about us.  The much more important thing is resolving (or starting to resolve) the issue of data ownership when that data has been generated collectively by multiple individuals and groups.

So, who owns crisis data?

The heart of this problem is ownership of community-generated data.  I have much reading and thinking to do before I can start to answer this question, but the use of agreements (even if it’s agreement that all data will be shared across the community) appears to be key.

The legal position in the US appears to be clear: “It is important to remember that even if a database or compilation is arranged with sufficient originality to qualify for copyright protection, the facts and data within that database are still in the public domain. Anyone can take those facts and reuse or republish them, as long as that person arranges them in a new way” (Uni of Michigan’s exceptions to copyrights page).  That’s actually a huge relief, because if verified (and IANAL), the constant work that we all do on existing crisis datasets will help us to keep them free to use.

So the issue now appears to be less of a legal one, and more of a moral and ethical one: when is it right to share data between groups, and when is it right to claim ownership?

Data Licences

Although ‘ownership’ of data for good is anathema to us, there is one reason why it can be good: reducing confusion about who can use what where, via licensing. We often need to say that the data we produce can be used by anyone, and say it legally and publicly, and that’s what open data licences do. Fortunately, there are some good “you can go use this” licences out there (e.g. ODbL), but as OSM et al know from painful experience, picking the right data licence to be compatible with other people’s data gathering and use can be hard.

Privacy

The privacy of individuals is extremely important in our community.  When we were working through another issue raised by ‘Z’, we considered locking down the spreadsheet to subscribers only – only to realise that that would mean making a list of people (and emails of people) engaged in this work.  Which we’d also have to protect.  We’re still thinking about that one.

Legal protection

We can’t stress this enough: if you’re running a crisis data group, then seriously consider creating a nonprofit company for it. We hate having hierarchies and official registrations too, but without the protection of being an NJ non-profit, those two individuals (and all that they and their family own) would be at risk, instead of this being an organisation-to-organisation thing. We’ve started an OpenCrisis page, “Who owns crisis data?”, for links and discussion on the ownership issue. We’d love contributions of useful links and analysis for it.

Crisismapping Meetups Jan-Feb 2014

[Cross-posted from OpenCrisis.org]

This weekend is going to be a busy one for in-person crisismapping events: Digital Humanitarian Training is launching its first meetup in New York, and the Digital Humanitarian Network is running its first in-person meeting in Boston USA (they’re both on our shiny new crisismapping calendar).

As someone who dedicated years to helping crisiscamps around the world and the CrisismappersNYC meetup (spawned from the CrisisCampNY meetups), this makes me both nostalgic and hopeful at the same time.

I’m nostalgic because even the most collaborative groups like CrisisCamp London & Crisismappers NYC are difficult to keep going from a distance (e.g. if you find yourself working 3500 miles from London, or even 50 from NYC). Though the distance may look short on a map, no amount of tech can fill the gap left by face-to-face time. Keeping people engaged in training on crisis mapping, connecting them to other mappers in different cities, and handling logistics is a lot for any one person to shoulder. And the planning, staffing and training work required at an event says nothing of the groundwork involved in identifying venues or maintaining networks and individual connections.

And I’m hopeful to see the next generation of crisismapper meetup organisers come through.  They’ll learn, like we did, about the things that do and don’t work, and hopefully will find some of the things we left behind for them, like the Crisiscamp-in-a-box packs describing everything from what stationery is good to have (post-it notes are always useful) to how to organise training (backstory: Crisiscamp London had a real cardboard box that they stored all their stuff in between meetings).   But hopefully, unlike many of us old ‘uns, they won’t burn out trying to train and map and organise meets all at the same time.

I wish you both luck, Andy and Willow – and if you ever want to drink a pint and talk about all the things that did and didn’t work in the past, I’ll see you sometime in New York!

Sara.