Sunday, April 20, 2014

Experimental Legal Education and Law Without Walls



"Success of law schools depends on rate of experimentation", said George Kembel, co-founder of Stanford D School. With the dire state of legal education in the US right now there is an awful imminence to that prediction. Add to the mix that legal education is undergoing all sorts of changes around the world and we have a situation about which it is impossible to be complacent.

I have just returned from Miami where we celebrated the fourth ConPosium of Law Without Walls. The growth in LWOW has been huge--in four years we've gone from 6 law schools to 26 law and business schools around the world, on every continent. Yet this year is special because we experimented with a new programme called LWOWX, which runs entirely online. If our pilot works, LWOWX will be a way of making LWOW accessible to a greater community.

What is special about LWOW and LWOWX is that students get to experiment in ways they won't find anywhere else in law school. Students have to mix law, business, finance, technology and design in creative ways that provide answers to problems. Of course LWOW is more than that. Students are placed in multicultural teams that span 19 time zones, with mentors who are busy and scattered across the globe. Now coordinate your meetings, work out which language to use, and assign different parts of the task. Difficult? You bet. And all you've got is four months to do it.

The way it works is that the students are given broad topics. Here are some examples:
  • The Death of the Cover Letter: Rethinking How to Find a Job and Build a Career
  • Cyber Justice: Using Technology to Provide Legal Services to Underserved Around the Globe
  • International Arbitration: What’s Under the Invisibility Cloak? 
  • Women in the Law: Is the Glass Ceiling Cracked, Smashed, or Unbreakable? 
Within these topics students had to design projects to solve a particular problem. The women-in-the-law team developed a smartphone app for female lawyers, creating an online community where their problems and difficulties could be aired and discussed. The death-of-the-cover-letter group created
"JD Handshake -- A website for law students looking for jobs, and for employers seeking to hire law school students and graduates that allows employers to get to know candidates better than they can via the traditional resume and cover letter and interview process."
One of the key points about these projects is that they must have a business case behind them. This doesn't mean they have to be for-profit ventures; there are plenty of not-for-profit projects. Either way, they have to be feasible and sustainable.

Let me give two examples from this year. "Nirubi" is the project that won this year's LWOWX competition. It is based on providing help for women affected by the Sri Lankan civil war who feel they have no means of expressing their voices and feelings, and so are powerless. Nirubi is designed to collect those voices and to work with NGOs.

"Judgment Pay" was a website designed to use crowdsourcing to help poor people collect their judgment debts--that bit of the legal process we tend to forget, actually getting hold of your damages from the defendant.

Just Innovate tweeted:

Just Innovate @Just_Innovate
Team Judgment Pay: Business Need? [tick], Business Model? [tick], Competitive Advantage? [tick].... Judges impressed so far at
Judgment Pay won this year's ConPosium. And by the way the ConPosium tweets under the hashtag #lwow2014 were "storified" by Robert Richards at storify.com with lots of photographs of the teams and judges.

There will be many ways that LWOW will grow and extend, not just in school numbers but in features and roles. What is clear to all of us who attended the ConPosium is that for law schools to retain meaning and relevance in modern society they must go beyond their traditional remits. We can no longer rely on the conventional wisdom of legal education nor can we continue to mystify our students with the process.

I imagine that because of our reliance on precedent we look to the past while only mildly attempting to predict the future. LWOW shows us how to be radical and fulfilling. It is a way of introducing experimentation and giving students (and faculty) good reasons for showing that legal education is worthwhile, fruitful and creative. We can start to see law students and lawyers as designers and innovators in a legal services market that is moving forward despite what we do in law schools.

It's worth remembering that within the eight regulatory objectives of the Legal Services Act 2007 are improving access to justice, improving the public understanding of law, and promoting competition within the provision of legal services. LWOW is showing us a way of achieving these objectives.

I return to Kembel's words: "Success of law schools depends on rate of experimentation". Yes.




Tuesday, April 01, 2014

Are Machines Ethical?


In an episode of Jonathan Creek (don't bother to find it, it's not good) a woman spilled her mother's ashes on the floor. She went to get a vacuum cleaner from another house but on her return she found the ashes had been stolen. Of course, they hadn't. Her mother had a robot vacuum machine that came out at intervals and sucked up the ashes. And no, you can't ask if this machine was ethical because it wouldn't make sense.

There have been a number of articles recently about machine-based activities in the legal sphere--document assembly, e-discovery and case analysis. This follows developments like Google's driverless car, which by 2012 had achieved 300,000 accident-free miles; the use of high-frequency trading in stock markets (see Michael Lewis, Flash Boys: A Wall Street Revolt); and machine-controlled laser surgery for eye correction. It's clear this is a growing trend, possibly an exponential one.

Whether or not we are approaching the point of singularity (there are arguments both ways), huge resources are being put into the mechanisation of law. In part this is because machines, robots and algorithms can do repetitive tasks more efficiently than humans, and in part because machines tend to be cheaper than humans. From a Marxist perspective it makes sense to move from labour to machines: the returns to capital are much greater.

To approach my question in the title, ethics are concerned with good, proper behaviour that accords with standards and principles that a profession abides by. They are also concerned with things that go wrong: mistakes, malfeasance, mischief.

Paul Virilio, the French philosopher, articulated the essential paradox of technology--that to invent something is to invent its negative. Invent ships and you invent shipwrecks, invent railways and you create derailment, and create the car and you invent the pile-up. Every advance in technology and machines creates its negative form. It is never a matter of if but only when. Modern society operates so quickly that the vital variable is speed.

Glitches in software and algorithms occur and have worldwide effects--for example, the 1987 collapse of the commodities and stock markets, when program trading went out of control and produced Black Monday. Even allowing for unintended consequences, we have to build in rules for machines to decide what actions to take when faced with catastrophic choices.

Tom Chatfield puts the trolley problem at the centre of the issue. A tram runs out of control and the driver sees that he is about to hit five men working on the track. He can, however, turn onto a siding, but in doing so he will kill a single man. Without delving into the deep void of the trolley problem and its variants (the fat man), I suggest we need to start thinking about this in the legal sphere as machines and algorithms become more common, especially in the face of legal aid cuts and the like. (For further information on the trolley problem et al. go to Experimental Philosophy.)

Given that automation is rising, given that computer-based legal services are increasing, how are we going to program machines for errors? Ultimately who will be responsible for those errors? Chatfield refers to two modes: automatic and manual. Humans are capable of both. We can adjust our behaviours to the moment, almost automatically, but we are also capable of thinking out the longer term consequences of our actions in manual mode. We bring heads and hearts together.

Algorithms don't do that. They are usually designed to maximise the effects of certain conditions. If I'm in a driverless car that by some accident is about to plough into a group of people, it could decide that veering off and killing me is the preferable outcome. I would disagree, of course.
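The kind of maximising rule described above can be sketched in a few lines of Python. This is a toy illustration only: the action names, casualty figures and function are invented for this post, not anything a real vehicle runs.

```python
def choose_action(actions):
    """Pick the action with the fewest expected casualties.

    `actions` maps an action name to its expected casualty count.
    Note what is absent: no notion of consent or of whose
    casualties they are -- the algorithm simply minimises a number.
    """
    return min(actions, key=actions.get)


# The driverless-car dilemma from the text, in crude numbers:
outcome = choose_action({
    "plough on": 5,   # hit the group of pedestrians
    "veer off": 1,    # kill the single passenger (me)
})
print(outcome)  # -> veer off
```

The passenger's objection has no place to live in this code; that is exactly the point.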

Some might argue that the algorithm's decision is ethically superior to my wants. But it is not thinking that way; it has a utilitarian viewpoint, to my cost. In a way the algorithm is superior because it isn't letting sentimentality intrude. Some artificial intelligence experts have argued that there is nothing wrong here as long as the programming is transparent and we can all understand what the consequences will be. We take our risk here.

What is more likely, however, is that we will outsource more activities to machines believing we've overcome the difficulties without actually investigating this. Chatfield says:
As agency passes out of the hands of individual human beings, in the name of various efficiencies, the losses outside these boxes don’t simply evaporate into non-existence. If our destiny is a new kind of existential insulation – a world in which machine gatekeepers render certain harms impossible and certain goods automatic – this won’t be because we will have triumphed over history and time, but because we will have delegated engagement to something beyond ourselves.
We know the consequence of this kind of delegation. We see it in the privatisation of prisons, health, and, more dangerously, in security.

As more areas of law come within the sphere of algorithms and machines, we will need to consider carefully the ethical problems that will inevitably arise. Accidents will happen, and people's livelihoods, liberty and property may all be at stake. How easy will it be to correct mistakes in an online divorce involving children, property, pensions and the like? Who or what will be culpable? How will errors be discovered? Who will have the authority to declare errors? Or will we subscribe to a utilitarian ethos that it must be for the greater good, so we should just lump it?

We don't have to wait for the point of singularity to start working these out.


