Liip Blog Kirby Mon, 19 Mar 2018 00:00:00 +0100 Latest articles from the Liip Blog en Addressing the gender gap with Django Girls Mon, 19 Mar 2018 00:00:00 +0100 <p>I recently attended a Django Girls workshop in Lausanne, along with my colleagues Stéphanie and Raphaël. Django Girls is a non-profit organization that aims to &quot;inspire women to fall in love with programming&quot;, and to do so they organize free programming workshops all around the world. These are usually held over a weekend: participants start from scratch and learn how to create their own website with Django by following a tutorial, with coaches available to answer the inevitable questions. One of the particularities of these workshops is that registration is restricted to women. If you're wondering why, let me tell you...</p> <h3>A short story</h3> <p>I graduated from the ETML (École Technique des Métiers de Lausanne) in 2005, after studying computer science for 3 years. The graduation picture shows 26 smiling people. Among them, 0 women. Zero. And that's not because the women in my class failed: they couldn't have, since there simply were no women in the class. Not a single one.</p> <p>Fast forward 4 years, to 2009: a new graduation, this time for a bachelor's degree in computer science. The graduation picture this time shows 20 smiling people who just got their diploma. Among them, not a single woman.</p> <p>Fast forward to 2018: I've been in IT for the last 10 years, creating websites and applications, sometimes with a team, sometimes on my own. In those 10 years, I can count the women developers I have worked with on the fingers of one hand. And I guess this number comes as no surprise, considering the number of women graduating from IT schools.</p> <h3>Bringing more diversity</h3> <p>The gender gap in the STEM (science, technology, engineering, and mathematics) fields is a reality, and, in this binary society where everything still has &quot;for boys&quot; and &quot;for girls&quot; labels, Django Girls events are a very good opportunity for attendees to get to know a domain that is still wrongly perceived by many as &quot;for boys&quot;. I hope this workshop has inspired some of the dozen attendees to pursue a career in IT, or at least shown them that programming is fun and not a &quot;for boys&quot; thing.</p> <img src="" alt="Attendees coding"> <p>This particular Django Girls workshop was organized by people from the department of computational biology at UNIL. It was the first time I attended such a workshop where the coaches were 100% women (Raphaël and I were there only as &quot;backup coaches&quot;, bringing our expertise if needed), and I realized how important that is for the image you give the attendees: it helps break the image of a domain reserved for men and shows that expertise is not related to gender.</p> <p>Coaching is also a very good opportunity to learn from others: most of the questions raised by the attendees make you think about why things have to be done a certain way, which can sometimes be complicated or illogical, and you can't just answer &quot;because that's the way it is&quot;. It forces you to think about why we do things the way we do, and to understand them better.</p> <p>It was the third time I attended a Django Girls workshop as a coach, and I will never stop being impressed by the patience and the learning ability of the attendees (they basically have to learn <em>everything</em> in one single day).
Nothing beats the feeling of seeing the spark in the eyes of an attendee understanding a concept and exclaiming &quot;aaaah yes, I got it&quot;!</p> <p>At the end of the day you can sense how tired everyone is, but also how happy to have taken part in such an experience, learned new things, and maybe started making new plans for the future.</p> <h3>Now what?</h3> <p>I hope this summary made you want to get involved in Django Girls, either by <a href="">participating as an attendee or a coach</a>, by <a href="">organizing your own Django Girls workshop</a>, by <a href="">supporting them</a>, or by <a href="">following the tutorial to learn how to create websites</a>.</p> <p>I would also like to take this opportunity to thank once again the organizers for making such an event happen. I can't wait for the next one!</p> Speech recognition with Tue, 13 Mar 2018 00:00:00 +0100 <p>Speech recognition is here to stay. Google Home, Amazon Alexa/Dot or Apple HomePod devices are storming our living rooms. Assistants on mobile phones such as Siri or Google's assistant have reached a point where they are actually reasonably useful. So we might ask ourselves: can we put this technology to other uses than asking Alexa to put beer on the shopping list, or asking Microsoft Cortana for directions? Not much is actually needed to create your own piece of software with speech recognition, so let's get started! </p> <h2>Overview</h2> <p>If you want to have your own speech recognition, there are three options: </p> <ol> <li>You can hack Alexa to do things, but you might be limited in possibilities.</li> <li>You can use one of the integrated solutions such as <a href="">Rebox</a>, which allows you more flexibility and has a microphone array and speech recognition built in.</li> <li>Or you use just a simple Raspberry Pi or your laptop. That's the option I am going to talk about in this article. Btw, <a href="">here</a> is a blog post from Pascal, another Liiper, showing how to do ASR in the browser. </li> </ol> <h2>Speech Recognition (ASR) as Open Source</h2> <p>If you want to build your own device, you can make use of excellent open-source projects like <a href="">CMU Sphinx</a>, <a href="">Mycroft</a>, <a href="">CNTK</a>, <a href="">kaldi</a>, <a href="">Mozilla DeepSpeech</a> or <a href="">KeenASR</a>, which can be deployed locally, often already run on a Raspberry Pi, and have the benefit that no data has to be sent through the Internet in order to recognize what you've just said. So there is no lag between saying something and the reaction of your device (we'll cover this issue later). The drawback might be the quality of the speech recognition and the ease of use. You might be wondering why it is hard to get speech recognition right. The short answer is: data. The longer answer follows.</p> <h3>In a nutshell - how does speech recognition work?</h3> <p>Normally (<a href="">original paper here</a>) the idea is that you have a <a href="">recurrent neural network</a> (RNN). An RNN is a deep learning network where the current state influences the next state. You feed 20-40 ms slices of audio, first transformed into a <a href="">spectrogram</a>, as input into the RNN. </p> <img src="" alt="A spectrogram"> <p>An RNN is useful for language tasks in particular because each letter influences the likelihood of the next. So when you say &quot;speech&quot; for example, the chance of saying &quot;ch&quot; after you've said &quot;spee&quot; is quite high (&quot;speed&quot; might be an alternative too). Each 20 ms slice is transformed into a letter, and we might end up with a letter sequence like this: &quot;sss_peeeech&quot;, where &quot;_&quot; means nothing was recognized. After removing the blanks and combining runs of the same letter into one, we might end up with the word &quot;speech&quot;, if we're lucky, among other candidates like &quot;spech&quot;, &quot;spich&quot;, &quot;sbitsch&quot;, etc. Because the word &quot;speech&quot; appears more often in written text, we'll go for that. </p> <img src="" alt="An RNN for speech recognition">
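<p>To make the collapsing step concrete, here is a minimal sketch of the greedy decoding just described (my illustration, not code from the original post): merge runs of identical letters, then drop the blank symbol &quot;_&quot;.</p> <pre><code class="language-python">from itertools import groupby

def collapse(frame_letters):
    # keep one letter per run of identical frame outputs
    merged = [letter for letter, _ in groupby(frame_letters)]
    # drop the blanks that separate genuinely repeated letters
    return "".join(letter for letter in merged if letter != "_")

print(collapse("sss_peeeech"))  # -> "speech"</code></pre> <p>Real decoders score many candidate sequences against a language model instead of taking this single greedy path, which is how &quot;speech&quot; wins over &quot;spech&quot;.</p>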
<p>Where is the problem now? Well, the problem is that you, as a private person, will not have the millions of speech samples needed to train the neural network. On the other hand, everything you say to your phone is collected by e.g. Alexa and used as training examples. Don't believe me? Here are all the <a href=" speech">samples</a> you have ever said to your Android phone. So what options do you have? You can still use one of the open-source libraries that come with a pre-trained model. But these models have often been trained only for English. If you want to make them work for German or even Swiss German, you'd have to train them yourself. If you just want to get started, you can use a speech-recognition-as-a-service provider. </p> <h2>Speech Recognition as a Service</h2> <p>If you feel like using a speech recognition service, it might surprise you that most startups in this area have been bought up by the giants. Google has bought the startup <a href=""></a> and Facebook has bought another startup working in this field: <a href=""></a>. Of course, the other big-five companies have their own speech services too. Microsoft has <a href="">cognitive services in Azure</a> and IBM has speech recognition built into <a href="">Watson</a>. Feel free to choose one for yourself; from my experience their performance is quite similar. In this example I went with</p> <h2>Speech recognition with</h2> <p>For a fun little project, &quot;Heidi - the smart radio&quot;, at the <a href="">SRF Hackathon</a> (btw. Heidi scored 9th out of 30 :)), I decided to build a smart little radio that listens to what you are saying. You just tell the radio to play the station you want to hear, and it plays it. That's about it. So all you need to build a prototype is a microphone and a speaker. Let's get started.</p> <h2>Get the audio</h2> <p>First you will have to get the audio from your microphone, which can be done quite nicely with Python and <a href="">pyaudio</a>. The idea here is that you create a never-ending loop which records 4 seconds of your speech and then saves it to a file. In order to send the data to, the script reads the file back and sends it as a POST request. Btw, we will do the recording in mono. </p> <pre><code class="language-python">import pyaudio
import wave

def record_audio(RECORD_SECONDS, WAVE_OUTPUT_FILENAME):
    #--------- SETTING PARAMS FOR OUR AUDIO FILE ------------#
    FORMAT = pyaudio.paInt16  # format of wave
    CHANNELS = 1              # no. of audio channels
    RATE = 44100              # frame rate
    CHUNK = 1024              # frames per audio sample
    #--------------------------------------------------------#
    # creating PyAudio object
    audio = pyaudio.PyAudio()
    # open a new stream for the microphone
    # (creates a PortAudio Stream Wrapper class object)
    stream =, channels=CHANNELS,
                        rate=RATE, input=True,
                        frames_per_buffer=CHUNK)
    #----------------- start of recording -------------------#
    print("Listening...")
    # list to save all audio frames
    frames = []
    for i in range(int(RATE / CHUNK * RECORD_SECONDS)):
        # read audio stream from microphone
        data =
        # append audio data to frames list
        frames.append(data)
    #------------------ end of recording --------------------#
    print("Finished recording.")
    stream.stop_stream()  # stop the stream object
    stream.close()        # close the stream object
    audio.terminate()     # terminate PortAudio
    #------------------ saving audio ------------------------#
    # create wave file object
    waveFile =, 'wb')
    # settings for wave file object
    waveFile.setnchannels(CHANNELS)
    waveFile.setsampwidth(audio.get_sample_size(FORMAT))
    waveFile.setframerate(RATE)
    waveFile.writeframes(b''.join(frames))
    # closing the wave file object
    waveFile.close()

def read_audio(WAVE_FILENAME):
    # function to read audio (wav) file
    with open(WAVE_FILENAME, 'rb') as f:
        audio =
    return audio

def RecognizeSpeech(AUDIO_FILENAME, num_seconds=5):
    # record audio of specified length in specified audio file
    record_audio(num_seconds, AUDIO_FILENAME)
    # reading audio
    audio = read_audio(AUDIO_FILENAME)
    # WIT.AI HERE
    # ....

if __name__ == "__main__":
    while True:
        text = RecognizeSpeech('myspeech.wav', 4)</code></pre>
<p>Ok, now you should have a myspeech.wav file in your folder that gets replaced with the newest recording every 4 seconds. We need to send it to to find out what we've actually said. </p> <h2>Transform it into text</h2> <p>There is <a href="">extensive documentation</a> for I will use the <a href="">HTTP API</a>, which you can simply try out with curl. To help you get started, I thought I'd write the file to show some of its capabilities. Generally, all you need is an access token from that you send in the headers, plus the data that you want transformed into text. You will receive a text representation of it. </p> <pre><code class="language-python">import requests
import json

def read_audio(WAVE_FILENAME):
    # function to read audio (wav) file
    with open(WAVE_FILENAME, 'rb') as f:
        audio =
    return audio

API_ENDPOINT = ''
ACCESS_TOKEN = 'XXXXXXXXXXXXXXX'

# get a sample of the audio that we recorded before
audio = read_audio("myspeech.wav")

# defining headers for the HTTP request
headers = {'authorization': 'Bearer ' + ACCESS_TOKEN,
           'Content-Type': 'audio/wav'}

# send the request as a POST request with the audio as data
resp =, headers=headers, data=audio)

# get the text
data = json.loads(resp.content)
print(data)</code></pre> <p>So after recording something into your &quot;.wav&quot; file, you can send it off to and receive an answer:</p> <pre><code class="language-bash">python
{u'entities': {}, u'msg_id': u'0vqgXgfW8mka9y4fi', u'_text': u'Hallo Internet'}</code></pre> <h2>Understanding the intent</h2> <p>Nice, it understood my gibberish! So now the only thing left is to understand the <strong><em>intent</em></strong> of what we actually want. For this, has created an interface to figure out what the text was about. Providers <a href="">differ</a> quite a bit in how they model intent, but for it is nothing more than fiddling around with the GUI.
</p> <img src="" alt="Teaching our patterns"> <p>As you can see in the screenshot, wit has a couple of predefined entity types, such as age_of_person, amount_of_money, datetime, duration, email, etc. What you basically do is mark the word you are particularly interested in with your mouse, for example the radio station &quot;srf1&quot;, and assign it to a matching entity type. If you can't find a fitting one, you can simply create your own, such as &quot;radiostation&quot;. Now you can use the textbox to enter some example formulations and mark the entity, to &quot;train&quot; wit to recognize your entity in different contexts. It works to a certain extent, but don't expect too much of it. If you are happy with the results, you can use the API to try it.</p> <pre><code class="language-python">import requests
import json

API_ENDPOINT = ''
ACCESS_TOKEN = 'XXXXXXXXXXXXXXX'

headers = {'authorization': 'Bearer ' + ACCESS_TOKEN}

# send the text
text = "Heidi spiel srf1."
resp = requests.get(';q=(%s)' % text, headers=headers)

# get the parsed intent
data = json.loads(resp.content)
print(data)</code></pre> <p>So when you run it you might get:</p> <pre><code class="language-bash">python
{u'entities': {u'radiostation': [{u'confidence': 1, u'type': u'value', u'value': u'srf1'}]}, u'msg_id': u'0CPCCSKNcZy42SsPt', u'_text': u'(Heidi spiel srf1.)'}</code></pre> <h2>Obey</h2> <p>Nice, it understood our radio station! There is not really much left to do other than play it. I've used a hacky mplayer call to just play something, but the sky is the limit here.</p> <pre><code class="language-python">...
if radiostation == "srf1":
    os.system("mplayer")
...</code></pre> <h2>Conclusion</h2> <p>That was easy, wasn't it? Well yes, but I omitted one problem: our little smart radio is not very convenient, because it feels very &quot;laggish&quot;. It has to listen for 4 seconds first, then transmit the data to wit, wait until wit has recognized it, then work out the intent and finally play the radio station. That takes a while - not really long, maybe 1-2 seconds, but we humans are quite sensitive to such lags. If you say the voice command at the exact moment when it is listening, you might be lucky. Otherwise you might end up having to repeat your command multiple times, just to hit the right slot. So what is the solution?</p> <p>The solution comes in the form of a so-called &quot;wake word&quot;. It's a keyword that the device constantly listens for, and the reason why you always have to say &quot;Alexa&quot; first if you want something from it. Once a device picks up its own wake word, it starts to record what you say after the keyword and transmits this bit to the cloud for processing and storage. In order to pick up the keyword fast, most of these devices do the automatic speech recognition for the keyword on the device itself and send the data off to the cloud afterwards. Some companies, like Google, went even further and put the <a href="">whole ML model</a> on the mobile phone in order to have a faster response rate and, as a bonus, to work offline too. </p>
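<p>To illustrate (this is my sketch, not part of the original project), the 4-second loop from above could be gated behind a wake word: only when the recognized text contains the keyword do we treat the utterance as a command. Note that a real wake-word engine spots the keyword locally on the device, while this naive version still sends every chunk to the cloud; handle_command() is a hypothetical helper standing in for the intent extraction and playback shown earlier.</p> <pre><code class="language-python">WAKE_WORD = "heidi"

if __name__ == "__main__":
    while True:
        # RecognizeSpeech() is the record-and-recognize loop from above,
        # assumed here to return the recognized text
        text = RecognizeSpeech('myspeech.wav', 4)
        if text and WAKE_WORD in text.lower():
            # only react when the wake word was heard
            handle_command(text)  # hypothetical: extract intent, play the station</code></pre>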
<h2>What's next?</h2> <p>Although the &quot;magic&quot; behind the scenes of automatic speech recognition systems is quite complicated, it's easy to use automatic speech recognition as a service. On the other hand, the market is already quite saturated with different devices at quite affordable prices. So there is really not much to win if you want to create your own device in such a competitive market. Yet it might be interesting to use open-source ASR solutions in already existing systems where there is a need for confidentiality. I am sure not every user wants their speech data to end up in a Google data center when they are using a third-party app. </p> <p>On the other hand, for the big players, offering devices at affordable prices turns out to be a good strategy. Not only are they collecting more training data this way - which makes their automatic speech recognition even better - but eventually they control a very private channel to the consumer, namely speech. After all, it's hard to find an easier way of buying things than just <a href="">saying it out loud</a>.</p> <p>For all other applications, it depends on what you want to achieve. If you are a media company and want to be present on these devices, which will probably soon replace our old radios, then you should start <a href="">developing</a> so-called <a href=";node=10068460031">&quot;skills&quot;</a> for each of these systems. The discussion on the pros and cons of smart speakers is already <a href="">ongoing</a>. </p> <p>For websites, this new technology might finally bring an improvement for impaired people, as most modern browsers increasingly seem to <a href="">support ASR directly</a> in the client. So it might not take long until the old paradigm in web development shifts from &quot;mobile first&quot; to &quot;speech first&quot;. We will see what the future holds.</p> Meet Kotlin — or why I will never go back to Java! Fri, 09 Mar 2018 00:00:00 +0100 <p>When Google and JetBrains <a href="">announced</a> first-class support for Kotlin on Android last year, I could not wait to use it on our next project. Java is an OK language, but when you are used to Swift on iOS and C# on Xamarin, it's sometimes hard to go back to the limited Java that Android has to offer.</p> <p>Within this past year, we successfully shipped two applications written exclusively in Kotlin, with another one to follow soon. We also decided to use Kotlin for older Java apps that we keep updating.</p> <p>I took my chance when the <a href="">Mobile Romandie Beer</a> meetup was looking for speakers. I knew that I had to show others how easy and fun this language is. </p> <p>It turned out great. We had people from various backgrounds: people just curious about it, UX/UI designers, iOS developers, Java developers, and people already using Kotlin in production.</p> <p>You can find my slides below:</p> <script async class="speakerdeck-embed" data-id="1504188547254400bb81fd9f30f2e701" data-ratio="1.77777777777778" src="//"></script> <p>I would like to share a few links that helped me learn about Kotlin:</p> <ul> <li><a href="">Kotlin Koans</a>: a step-by-step tutorial that executes your code directly in the browser</li> <li><a href="">Kotlin and Android</a>: the official Android page to get started with Kotlin on Android</li> <li><a href="">Android KTX</a>: a useful library released by Google to help with Android development</li> </ul> <p>See you at the <a href="">next meetup</a>!</p> Machine Learning as a Service with firefly Sun, 04 Mar 2018 00:00:00 +0100 <p>I know there is <a href="">yhat science ops</a>, which is a product for exactly this problem, but that solution is a bit pricey and maybe not the right thing if you want to prototype something really quickly.
There is, of course, the option to use your own server and wrap your ML model in a thin layer of Flask, as I have shown in a <a href="">recommender example for Slack</a> before. But now there is an even easier solution using firefly and Heroku, which lets you deploy your prototypes basically for free.</p> <h2>Installation</h2> <p>You can easily install firefly with pip: </p> <pre><code class="language-bash">pip install firefly-python</code></pre> <p>Once it's installed (I've been using Python 2.7 - shame on me), you should be able to test it with:</p> <pre><code class="language-bash">firefly -h</code></pre> <h2>Hello World Example</h2> <p>We can write a simple function that returns the sum of two numbers:</p> <pre><code class="language-python"># example.py
def add(x, y):
    return x + y</code></pre> <p>and then run it locally with firefly:</p> <pre><code class="language-bash">firefly example.add
2018-02-28 15:25:36 firefly [INFO] Starting Firefly...</code></pre> <p>The cool thing is that the function is now available at <a href=""></a> and you can use it with curl (make sure the firefly server is still running in another tab):</p> <pre><code class="language-bash">curl -d '{"x": 4, "y": 5}'
9</code></pre> <p>or even with the built-in client:</p> <pre><code class="language-python">import firefly
client = firefly.Client("")
client.add(x=5, y=5)</code></pre> <h2>Authentication</h2> <p>For any real-world example, you will need to use authentication. This is actually also quite easy with firefly. You simply supply an API token when starting it up:</p> <pre><code class="language-bash">firefly example.add --token plotti1234</code></pre> <p>Using the firefly client you can easily authenticate with:</p> <pre><code class="language-python">client = firefly.Client("", auth_token="plotti1234")
client.add(x=5, y=5)</code></pre> <p>If you don't supply it, you will get a:</p> <pre><code>firefly.client.FireflyError: Authorization token mismatch.</code></pre> <p>Of course, you can still use curl to do the same:</p> <pre><code class="language-bash">curl -d '{"x": 6,"y":5}' -H "Authorization: Token plotti1234"
11</code></pre> <h2>Going to production</h2> <h3>Config File</h3> <p>You can also use a config.yml file to supply all of these parameters:</p> <pre><code class="language-yml"># config.yml
version: 1.0
token: "plotti1234"
functions:
  square:
    path: "/add"
    function: "example.add"</code></pre> <p>and then start firefly with:</p> <pre><code class="language-bash">firefly -c config.yml</code></pre> <h3>Training a model and dumping it onto drive</h3> <p>Now you can train a model with scikit-learn and dump it to drive with joblib. You can then easily load it with firefly and serve it under a route.
First, let's train a hello-world decision tree model on the iris dataset and dump it to drive:</p> <pre><code class="language-python">#
from sklearn import tree
from sklearn import datasets
from sklearn.externals import joblib

# load dataset
iris = datasets.load_iris()
X, Y =,

# pick a model
clf = tree.DecisionTreeClassifier()
clf =, Y)

# try it out
X[0:1]               # array([[5.1, 3.5, 1.4, 0.2]])
clf.predict(X[0:1])  # array([0]) - result of classification

# dump it to drive
joblib.dump(clf, 'iris.pkl')</code></pre> <p>You can then load this model in firefly as a function and you are done:</p> <pre><code class="language-python">#
from sklearn.externals import joblib

clf = joblib.load('iris.pkl')

def predict(a):
    predicted = clf.predict(a)  # numpy array of predicted class labels
    return int(predicted[0])</code></pre> <p>To start it up you use the conventional method:</p> <pre><code class="language-bash">firefly iris.predict</code></pre> <p>And now you can access your trained model simply via the client or curl:</p> <pre><code class="language-python">import firefly
client = firefly.Client("")
client.predict(a=[[5.1, 3.5, 1.4, 0.2]])  # the same values as above
# returns 0 - the same result, yay!</code></pre> <h3>Deploy it to Heroku!</h3> <p>To deploy it to Heroku you need to add two files: a Procfile that says how to run our app, and a requirements.txt file that lists the libraries it will be using. The requirements.txt is quite straightforward:</p> <pre><code># requirements.txt
firefly-python
sklearn
numpy
scipy</code></pre> <p>And in the Procfile you can use gunicorn to run the app and supply the functions that you want to expose as environment parameters:</p> <pre><code># Procfile
web: gunicorn --preload firefly.main:app -e FIREFLY_FUNCTIONS="iris.predict" -e FIREFLY_TOKEN="plotti1234"</code></pre> <p>The only thing left to do is commit everything to git and deploy it to Heroku:</p> <pre><code class="language-bash">git init
git add .
git commit -m "init"
heroku login # to log into your heroku account
heroku create # to create the app</code></pre> <p>The final step is the deployment, which is done via git push in Heroku:</p> <pre><code class="language-bash">git push heroku master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 279 bytes | 279.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----&gt; Python app detected
remote: -----&gt; Installing requirements with pip
remote:
remote: -----&gt; Discovering process types
remote:        Procfile declares types -&gt; web
remote:
remote: -----&gt; Compressing...
remote:        Done: 119.6M
remote: -----&gt; Launching...
remote:        Released v7
remote:        deployed to Heroku
remote:
remote: Verifying deploy... done.
To
   985a4c3..40726ee  master -&gt; master</code></pre> <h3>Test it</h3> <p>Now you've got a running machine learning model on Heroku for free! You can try it out via curl. Notice that I've wrapped the array in a string representation to keep things easy. </p> <pre><code>curl -d '{"a":"[[5.1, 3.5, 1.4, 0.2]]"}' -H "Authorization: Token plotti1234"
0</code></pre> <p>You can of course also use the firefly client:</p> <pre><code class="language-python">client = firefly.Client("", auth_token="plotti1234")
client.predict(a=[[5.1, 3.5, 1.4, 0.2]])</code></pre> <h3>Bonus: Multithreading and Documentation</h3> <p>Since we are using gunicorn, you can easily start 4 workers, and your API should respond better under high load.
Change your Procfile to:</p> <pre><code>web: gunicorn --workers 4 firefly.main:app -e FIREFLY_FUNCTIONS="iris.predict" -e FIREFLY_TOKEN="plotti1234"</code></pre> <p>Finally, there is so far only <a href="">crude</a> support for apidoc-style documentation. But when you do a GET request to the root / of your app, you will get a listing of the docstrings from your code. Hopefully in the future they will also support apidoc or swagger, to make the usage of such an API even more convenient: </p> <pre><code>curl -H "Authorization: Token plotti1234"
{"app": "firefly", "version": "0.1.11", "functions": {"predict": {"path": "/predict", "doc": "\n @api {post} /predict\n @apiGroup Predict\n @apiName PredictClass\n\n @apiDescription This function predicts the class of iris.\n @apiSampleRequest /predict\n ", "parameters": [{"name": "a", "kind": "POSITIONAL_OR_KEYWORD"}]}}}</code></pre>
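<p>For reference, this is roughly what the predict() function from above looks like once such an apidoc-style docstring is added - reconstructed from the JSON response above, so treat it as a sketch:</p> <pre><code class="language-python"># - predict() with the docstring that produces the listing above
from sklearn.externals import joblib

clf = joblib.load('iris.pkl')

def predict(a):
    """
    @api {post} /predict
    @apiGroup Predict
    @apiName PredictClass

    @apiDescription This function predicts the class of iris.
    @apiSampleRequest /predict
    """
    predicted = clf.predict(a)
    return int(predicted[0])</code></pre>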
<p>I highly recommend this still-young project, because for prototypes it really reduces deploying a new model to a git push heroku master. There are obviously some things missing, like extensive logging, performance benchmarking, various methods of authentication and better support for docs. Yet it's so much fun to deploy models in such a convenient way.</p> One for all, all for one Fri, 02 Mar 2018 00:00:00 +0100 <p><strong>Why a blog?</strong><br /> The cooperation between Raiffeisen and Liip has developed and deepened over the years. What began with first steps in agility has grown into a joint Scrum team working in the same office, in which company affiliation plays no role. That's why we are sharing our experience in a blog series about collaboration and cooperation.</p> <p><strong>Raiffeisen</strong><br /> Approximately 255 Raiffeisen banks are currently members of the Raiffeisen Switzerland cooperative. Raiffeisen Switzerland provides services for the entire Raiffeisen Group and is responsible for the strategic orientation of the business areas of the Raiffeisen banks, as well as for risk management, marketing, information technology, training, and supplying the banks with liquidity. Raiffeisen Switzerland also conducts its own banking business through branches.</p> <p><strong>MemberPlus</strong><br /> As Raiffeisen's customer loyalty platform, MemberPlus has grown over several years and now offers a wide range of services to Raiffeisen customers. In addition to discounts on event tickets, there are special conditions on hotel accommodation for private customers, and much more. Corporate customers also have access to special sponsoring deals around the Raiffeisen Super League. Raiffeisen Music is the app for young people to listen to their favourite songs, visit concerts for less money and, with a bit of luck, meet their stars.</p> <p><strong>The project: MemberPlus Portal 2.0</strong><br /> In 2018, a relaunch of the MemberPlus platform realizes the following vision: &quot;We offer everyone an overview of offers for members. These offers can be booked quickly and easily by MemberPlus customers&quot;. In addition to this clear user centricity, the second focus is on technical innovation: an upgrade to Magento 2 and a completely new user interface built with Angular.</p> <p><strong>To be continued</strong><br /> Curious about the next blog post? We'll publish the next article at the end of March, on the topic of project setup.</p> Laura Kalbag – Accessibility for Everyone Wed, 21 Feb 2018 00:00:00 +0100 <h2>How does it feel to navigate the web with an impairment?</h2> <p>Imagine you can't see and you listen to a screen reader: what does it say? What is wrong with a screen reader? It reads titles filled with SEO keywords (generalities and nothing specific), then it goes 'link-webcontent-banner-link-link-webcontent-image…' You get the idea.<br /> Imagine you can't hear and you see a video without subtitles. What can you understand?<br /> Imagine you have a fine motor impairment: how can you click on a tiny link 'here'?<br /> Imagine it is your first time on the web: you don't know the conventions, and you don't know how to fill in a contact form (what does the asterisk mean?)</p> <p>Laura started her talk with a demonstration of such commonly faced difficulties. </p> <h3><strong>There are 4 ways in which a page can be difficult:</strong></h3> <ul> <li>The page is hard to operate,</li> <li>The page is hard to understand,</li> <li>The page is not readable,</li> <li>The page is not listenable.</li> </ul> <h2>It is not about other people</h2> <p>Are you able-bodied? Do you ever feel concerned by such issues? If the answers are yes and then no, then besides possibly lacking empathy, you are short-sighted.<br /> There is about a 100% chance that, in the future, you will lose some of your abilities. We grow older: how easy is it for your grandparents to navigate the web? How easy is it for kids in comparison? Don't fool yourself, you will be the grandparent.<br /> Even temporarily, with a broken arm, a broken leg, an illness or an accident, we will all be impaired at some point. Actually, you could refer to yourself as a TAB: 'temporarily able-bodied'.<br /> While we enjoy our condition as TABs, it is the world we create that is impairing.<br /> Imagine a world created for people who are half our size: how easy would it be to get around a house built to a standard height of 1m10? It would be uncomfortable and you might even harm yourself, like walking around a medieval house and hurting your head because you are too tall.</p> <h2>What are accessibility and inclusive design?</h2> <p>Accessibility is a way around. For example: you have stairs at the main entrance, but you provide a way around your house with a lift for anyone on wheels (from a mother with a child in a buggy to someone living with a wheelchair).<br /> How does it feel to always have to take the back door because you get around on wheels?</p> <p>Inclusive design goes beyond the alternative: it is designing for everyone from the beginning. Obviously disability is diverse, and there is little chance you can accommodate everybody. However, you can make slight changes that provide a wider range of possibilities. Inclusive design is designing so that everyone can take the front door.</p> <h3>You can:</h3> <ul> <li>Make it easy to see,</li> <li>Easy to hear,</li> <li>Easy to operate,</li> <li>Easy to understand.</li> </ul> <p><em>“We need to design for purpose. Accessibility is not binary: it is our eternal goal. Iteration is our power: digital is not printed.”</em> says Laura. </p> <h2>Practical actions for copywriting</h2> <p>I really liked that Laura provided us with a wide range of practical advice on creating inclusive design.
The list below is not exhaustive; it is just what caught my ear. </p> <ul> <li>Give your content a clear hierarchy and clear structure,</li> <li>Don't be an attention thief,</li> <li>Use plain and simple language and explain the content,</li> <li>Give your content order,</li> <li>Use headings to segment and label: headings are not just a visual feature, in plain text, use hierarchy such as <h1></li> <li>Prefer descriptive linking, such as <em>Contact us</em> rather than 'Click <em>here</em> to contact us',</li> <li>Use punctuation, like commas and full stops → it gives the screen reader a break,</li> <li>Add transcripts (they are also useful to people who just want to scan the text),</li> <li>Use captions and subtitles for video (captions include all of the audio information, for example a bit of audio). Producing captions and subtitles is easy with Jubler. Another way of getting quick subtitles is to reuse and edit the auto-captions.</li> </ul> <h3>Alternative content</h3> <p>For people who can't access your primary content (because of a slow connection or a sight disability), provide a text alternative (the alternative attribute). It gives the browser a way to fall back.<br /> Write descriptive, meaningful alternative text. Rather than 'Picture of my dog', be creative and use 'Picture of the head of my dog resting on my knee, looking very sad while I work with my laptop on my lap.'</p> <h3>Try out and iterate</h3> <p>Social media is a great place to practise writing alternative text. For example, you can add descriptions to pictures on Twitter.</p> <h2>Can accessible websites be beautiful?</h2> <p>Laura advises us to consider aesthetics as design, not as decoration, because ugly is not accessible anyway. <em>&quot;We are not making art, beauty is a thoughtfully-designed interface.&quot;</em> says Laura.</p> <h3>Practical actions: aesthetic principles</h3> <ul> <li>Use buttons for buttons and links for links: buttons make something happen, links take someone somewhere. Interfaces should not be confusing: one needs to understand what the purpose is, what one can do with it, and when to do it,</li> <li>Conventions: don't be different for the sake of being different, but don't do something just because everybody does it,</li> <li>Ensure the layout order reflects the content order for keyboard navigation,</li> <li>Width: long lines are difficult to follow, </li> <li>Typography: choose according to readability and suitability, not because it looks cool: Heinemann vs. Georgia (a beautiful serif, but confusing if you are new to reading),</li> <li>Small is not tidy, it is just small,</li> <li>Don't prevent font resizing,</li> <li>Consider the font weights,</li> <li>Consider the line heights,</li> <li>Colour: it should not be the sole means of conveying information (example: use a dotted line),</li> <li>Colour contrast,</li> <li>Don't decide what is good for other human beings; rather, ask them. </li> </ul> <p><em>&quot;Our industry isn't so diverse: we don't all have the same needs, but we mostly build products for ourselves. We need to understand and care.&quot; </em>advocates Laura.</p> <h2>Diversify</h2> <p>It is beneficial to work within a diverse team. Empathy is easier because you embrace differences. When needs differ within a team, it becomes much harder to ignore difference. When you understand problems, you are better at solving them.<br /> A diverse team also prevents us from 'othering': let's not speak about the 'other' people.<br /> Laura proposes to go a step further: what if we spoke about a person rather than a user?
Then it is not user experience design, just experience design.</p> <p>We can also diversify our source material.<br /> <em>“Don't shut people out. It impacts people's lives. We build the new everyday things; we have to take responsibility for what we do.” </em>advocates Laura.</p> <h2>Everyday actions you can take</h2> <p>If you are not a designer or a copywriter, or if you feel that you are not in a position to decide, you can still make a difference: </p> <ul> <li>Be the advisor: provide info and trainings,</li> <li>Be the advocate: if you are not marginalised you have more power,</li> <li>Be the questioner,</li> <li>Be the gatekeeper,</li> <li>Be difficult: embrace the awkwardness of being annoying,</li> <li>Be unprofessional: don't let people tell you to be quiet or to be nice,</li> <li>Be the supporter: if you can't risk things, support the people who speak up.</li> </ul> <h2>About Laura Kalbag and IxDA Lausanne</h2> <p><a href="">Laura Kalbag</a> is a designer from the UK, and the author of <a href="">Accessibility For Everyone from A Book Apart</a>. </p> <img src="" alt="dsc5069"> <p>Laura works on everything from design and development through to learning how to run a sustainable social enterprise, whilst trying to make privacy, and broader ethics in technology, accessible to a wide audience. On an average day, you can find Laura making design decisions, writing CSS, nudging icon pixels, or distilling a privacy policy into something humans understand. (Text by IxDA Lausanne.)</p> <p><a href="">IxDA Lausanne</a> is your local chapter of the <a href="">Interaction Design Association - IxDA</a>.<br /> The team organises events for interaction design enthusiasts.<br /> We are really happy to be the main sponsor of this great event and can't wait for the next one.<br /> <a href="">Check the programme</a></p> Migros Culture Percentage Web-Relaunch Thu, 15 Feb 2018 00:00:00 +0100 <p><strong>Go-live before Christmas</strong><br /> Shortly before Christmas, the new website of Migros Culture Percentage went live.</p> <p><strong>New CMS</strong><br /> The website <a href=""></a> presenting Migros Culture Percentage's commitment was based on an outdated content management system (CMS). This was to change by the end of 2017. The site was migrated to Sitecore, the CMS already in use within the Migros Group. </p> <p><strong>Responsive</strong><br /> The new site had to work on mobile devices and meet current technological standards, and the project was used to update the site in all respects. Since the end of 2017 the site has been state-of-the-art: it is responsive and can be accessed conveniently on all mobile devices and, of course, on the desktop computer at home.</p> <p><strong>Styleguide generator</strong><br /> A pattern library generator was used as the basis for the front-end development (<a href=""></a>). The appearance therefore rests on a technically clean foundation, which secures the front-end for the current appearance and for future design developments. The user journeys have also been reworked. Thanks to its user-friendliness, the site is not only pleasing to the eye but also a pleasure to use. </p> <p><strong>Cultural and social offering</strong><br /> Migros Culture Percentage gives a broad population access to cultural and social services.
The Migros Cooperative Association finances this voluntary commitment in the areas of culture, society, education, leisure and business.</p> <p>The website <a href=""></a> presents the various projects and at the same time makes it possible to apply for funding.</p> Drupal 8: Using the &quot;config&quot; section of composer Tue, 13 Feb 2018 00:00:00 +0100 <p>Composer is the way we handle our dependencies in Drupal 8. We at Liip use Composer for all our Drupal 8 projects, and we have switched to Composer for a lot of Drupal 7 projects as well.</p> <p>We use the composer template available on GitHub:<br /> <a href=""></a></p> <p>Composer has a lot of cool features. There are several plugins we use in all our Drupal projects, like </p> <ul> <li><a href="">Composer Patches</a></li> <li><a href="">Drupal Scaffold</a></li> </ul> <h2>Useful composer config options for Drupal developers</h2> <p>Today, I would like to share some cool features of the &quot;config&quot; section of composer.json.</p> <p>Let's have a look at the following config section of my Drupal 8 project:</p> <pre><code class="language-json">"config": {
    "preferred-install": "source",
    "discard-changes": true,
    "secure-http": false,
    "sort-packages": true,
    "platform": {
        "php": "7.0.22"
    }
},</code></pre> <h3>Composer config: &quot;preferred-install&quot;: &quot;source&quot;</h3> <p>Have you ever needed to patch a contrib module? You found a bug and now you want to publish a patch file on But how can you create a patch if the contrib module was downloaded as a zip file via Composer and extracted to your contrib folder, which is not under source control? </p> <p>&quot;<strong>preferred-install: source</strong>&quot; is your friend! Add this option to your composer.json, then</p> <ul> <li>delete your dependency folders and </li> <li>run <code>composer install</code> again</li> </ul> <p>All dependencies will be cloned via git instead of downloaded and extracted. If you need to patch a module or Drupal core, you can create patches easily via git, because the dependency is under version control.</p> <h3>&quot;discard-changes&quot;: true</h3> <p>If you are working with <a href="">Composer Patches</a> and <code>preferred-install: source</code>, you want to enable this option. Once patches have been applied, the source differs from the clean git checkout, and such modified files can block the <code>composer install</code> call when you deploy with Composer. This option avoids messages like &quot;The package has modified files.&quot; during deployment if you combine it with <code>composer install --no-interaction</code>.</p> <h3>&quot;sort-packages&quot;: true</h3> <p>This option keeps all the packages inside composer.json sorted in alphabetical order. Very helpful.</p> <h3>Composer config: &quot;platform&quot; (force / fake a specific PHP version)</h3> <p>Often our live / deployment servers run PHP 7.0, while locally you might run PHP 7.1 or even PHP 7.2. This can be risky: if you run a &quot;composer update&quot; locally, Composer will assume that you have PHP 7.1 available and will download or update your vendor dependencies for PHP 7.1. If you deploy later, you will run into mysterious PHP errors, because you do not have PHP 7.1 available on your target system.<br /> This is the reason we always fix / force the PHP version in our composer.json to match the current live system.
</p> <h3>Drupal Core update: How can I preserve the &quot;.htaccess&quot; and &quot;robots.txt&quot; files during the update?</h3> <p>If you do Drupal core updates, these files always get overridden by the default files. On some projects you might have changed these files, and it's quite annoying to revert the changes on every core update.</p> <p>Use the following options in the &quot;extra&quot; section of your composer.json to get rid of this issue.</p> <pre><code class="language-json">"extra": {
    "drupal-scaffold": {
        "excludes": [
            "sites/",
            ".htaccess"
        ]
    }
}</code></pre> Launching Agile Zürich!! Sun, 11 Feb 2018 00:00:00 +0100 <p>The story of the word “agile” started exactly 17 years ago, when 17 practitioners met in Utah and drafted the famous <a href="">Manifesto for Agile Software Development</a>. That was the turning point that turned the IT industry completely upside down, and ever since, it has spread across various sectors.</p> <p>Surprisingly, this small gathering turned out to be a global earthquake in the way we see the world we live in. From finance to business to organizations, “agile” is everywhere now. The downside of this revolution is that the word “agile“ is now an adjective used, overused and abused everywhere. By becoming mainstream, some shortcuts were taken to expand it to mass adoption, and its original flavor was diluted along the way.</p> <h2>LOST IN AGILITY</h2> <p>Over the years, many communities have emerged, gathering specialists around the new trending methods and games that are created every year. Around Zürich, a dozen groups coexist around the same subject, “uncovering better ways of [working] by doing it and helping others do it“, as a tweak to the introduction of the manifesto.</p> <p>One can easily get lost in the multiplication of new names and subgroups. Especially when you are new to agility. Where to start? Where to meet practitioners?</p> <p>We lack a place to gather all together, to open the stage to newcomers, to share anecdotes and challenges. To talk freely about “agile”. To get back to the roots sometimes, or to launch crazy ideas!</p> <p>An open space is definitely needed.</p> <p>Inspired by the successes of the strong Agile Tour communities around the globe... let's launch Agile Zürich!!</p> <h3>OPEN SOURCE</h3> <p>Like open source software that anyone can inspect, modify, and enhance, Agile Zürich is an open source community. It is especially open for all. From any industry. From beginners to advanced practitioners, from curious minds to established professionals, from skeptics to believers… Agile Zürich intends to make people share.</p> <p>And because it is always better to start early, membership is free for students, as well as for unemployed people.</p> <h3>BY AND FOR THE COMMUNITY</h3> <p>No company or person owns it; it belongs to its members and is intended to evolve through the years, based on its members' actions.</p> <p>Agile Zürich is also not for profit, so every cent is spent to make it live, to bring value to the community.</p> <h3>SHARING AND LEARNING TOGETHER</h3> <p>Agile Zürich is not about preaching “agile” either; it is about sharing stories and learning techniques to face the complexity of the world. No one is right here, and no one is wrong either.</p> <p><br /></p> <p><em>“At Agile Zürich, we are uncovering better ways of working by doing it and helping others do it.”</em></p> <p><a href="">Join the group now</a> and start sharing your thoughts!
We'll post the date of the first gathering soon, so let's keep in touch!</p> <p>Also, <a href="">follow Agile Zürich on Twitter</a>.</p> Innovative web presence of Steps Thu, 08 Feb 2018 00:00:00 +0100 <p><strong>Anniversary year</strong><br /> The dance festival Steps celebrates its 30th birthday in 2018. Every two years, the platform for contemporary dance presents approximately a dozen dance companies throughout Switzerland.</p> <p><strong>Innovative design</strong><br /> The birthday present to Steps was an innovative, reduced design of the website. The appearance is now completely responsive and therefore usable on mobile phones and tablets, as well as at home on the desktop.</p> <p><strong>Challenging implementation</strong><br /> The implementation of the new design was particularly challenging in development, as the new appearance includes several micro-animations. This also applies to the animated date change in the schedule.</p> <p><strong>New CMS</strong><br /> The basis of the website was also renewed in the project: Steps now runs on the Sitecore content management system.</p> <p><strong>Cross-agency cooperation</strong><br /> The new Steps appearance was created in close cooperation with Migros in the lead, Y7K as the design agency, Namics for the technical implementation in the CMS, and Liip for the front-end implementation.</p>