Could Big Data help to prevent the spread of Ebola?

by Irina for IBA Group
Posted on October 6, 2014

IBA Group
Mark Hillary

I’ve written about Big Data in several blogs here: what it is, how it can be defined, and even how it can be used. But there are two additional factors that really help with any understanding of how Big Data fits into the modern organisation.
How do you make it work – what tools are needed to move from just having a large amount of data available to being able to gain insights from that data?

Do I have the kind of data that I can get insights from? What kind of insights am I expecting from the information that I have?
I have added two links to recent news stories that elaborate further on these questions.

It is no use to anyone if you just have an enormous amount of data but no tools to analyse it. You can’t engage with Big Data using an Excel sheet. The volumes involved are often enormous – far more than you could load into a computer and study as a single block of information.

So this is one of the first issues to address, and it may be where an expert partner can help the most. Is your data available? What tools can you use to collect together all that structured and unstructured data, and how can you even start to analyse it?
Now, what are the kinds of insights you can find? It depends on your business and the type of data available, but with Big Data you can uncover all kinds of relationships between variables that were not visible without the analysis.

This story of how Big Data is helping to predict where Ebola will strike next is a great example. It takes information we already have – infections, locations of hospitals, numbers of doctors, and so on – and uses past knowledge and these factors to make predictions. Now imagine applying these insights in your own business.
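As a purely illustrative sketch – not the actual model behind the story, with invented data, invented features, and a scikit-learn classifier chosen by me – such a prediction could look like this: train on historical regional factors, then score a new region.

```python
# Illustrative sketch only: invented data and features, not the real model.
from sklearn.linear_model import LogisticRegression

# Each row describes a region: [nearby infections, hospitals per 100k, doctors per 100k]
past_regions = [
    [120, 0.5, 2.0],
    [3, 4.0, 25.0],
    [80, 1.0, 5.0],
    [0, 3.5, 30.0],
]
outbreak_followed = [1, 0, 1, 0]  # did an outbreak follow in that region?

model = LogisticRegression().fit(past_regions, outbreak_followed)

# Estimate the outbreak risk for a new region with the same factors.
new_region = [[45, 1.5, 8.0]]
print(model.predict_proba(new_region)[0][1])
```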

Making Big Data work with the right tools and determining the type of insight you need are two important factors in planning how you can make it work for you.

Big Data: It’s a big deal only if you have the tools to use it
How big data could help stop the Ebola outbreak

Prosperous Future of Cloud Computing

by Irina for IBA Group
Posted on September 22, 2014

Zhanna Huziuk
IBA Group

One of the latest trends in the IT industry, cloud computing is growing rapidly and has good prospects for the future. It represents a new kind of service that provides on-demand scalability, cost reduction, and the ability to use IT resources efficiently.

Although cloud computing is still at an early stage, it has already proven to be a revolutionary turn in the IT industry. Moreover, cloud computing represents not only fundamental changes in IT, but also change in the business environment in general. The main underlying reason for this business change is the wide range of benefits cloud provides for all types of enterprises, both SMEs and large-scale organizations, in terms of lower IT spending and wider business opportunities. All this makes cloud computing a game changer in the industry.

The European Commission may serve as an indicator of the pace of cloud adoption. It has already developed the EU Cloud Strategy to unleash the potential of cloud computing in Europe. The European Commission believes that cloud computing can increase productivity and create new businesses, services, and jobs.

What makes cloud computing so promising? It enables companies to reap the benefits of highly valuable IT assets – infrastructure resources, middleware, software, and computing resources – without actually buying these assets, consuming them as a service instead.

For example, when a customer deploys software in the traditional way, it buys a license to acquire the software. With Software as a Service (SaaS), customers do not need to own a license; they just pay a subscription fee instead. In other words, cloud computing is characterized by a lower cost of entry and quicker ROI. As a result, organizations reduce IT-related costs and make their IT spending more predictable.
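As a simple illustration of that lower cost of entry – with entirely made-up figures – compare the cumulative spend of a traditional license against a subscription:

```python
# Hypothetical figures for illustration only.
license_upfront = 50_000   # perpetual license, paid up front
yearly_support = 8_000     # annual support for the licensed product
saas_yearly_fee = 15_000   # subscription fee, no upfront payment

for year in range(1, 6):
    traditional = license_upfront + yearly_support * year
    saas = saas_yearly_fee * year
    print(f"Year {year}: traditional = {traditional:,}  SaaS = {saas:,}")
```

In this toy example, the subscription stays far cheaper over the first five years – exactly the lower-entry-cost effect described above.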

Analytical agencies are closely investigating the cloud computing trend and related issues. IDC found that 81% of enterprises reported lower IT costs, with reductions of 10 to 20%, and that 12% of enterprises reported savings of 30% or more.

These significant savings encourage businesses to consider migrating to a cloud service model. Traditional IT cannot deliver such savings because of higher entry costs and subsequent high expenditure on support, management, and maintenance. The scale of cost reduction (in percent) experienced by businesses that adopted a cloud model is presented in the figure below.

To be a strong player in the market of cloud services, IBA Group is working to deepen its knowledge, master new skills, and gain wider experience.

IBA Group specialists are skilled in virtualization products and technologies, including installation, configuration, and management of the VMware vSphere server virtualization platform, Windows Server Hyper-V and System Center, and the VMware ESX/ESXi and KVM hypervisors. In addition, IBA experts are certified in the ITIL v3 framework, which is applicable to cloud services.
IBA Group has developed a proprietary solution called IBA Cloud Solution. It provides a reliable network, computing, and disk architecture with backup and related software, and offers an easy, step-by-step migration from an existing physical infrastructure to a virtual one. The solution is based on IBM Cloud & Smarter Infrastructure, uses highly reliable IBM BladeCenter hardware, and builds its virtualization on the leading VMware vSphere platform.

For details of the predicted IT cloud services spending in specific markets, follow the links to the IDC press release and to Research In Future Cloud Computing by IDC.

For details of the EU cloud strategy, visit Unleashing the Potential of Cloud Computing in Europe

Will a skills shortage threaten the future of Big Data?

by Irina for IBA Group
Posted on September 11, 2014

IBA Group
Mark Hillary

As the economy develops, old jobs vanish and new ones are created. This process has always taken place as technology creates new needs and old skills are replaced. Just consider how important the blacksmith was before cars became common – and if someone had described themselves as a blogger or Flash developer in 1985, it would have made no sense. Times change.

Big Data is another of these major changes – not just in the sense that we can now analyse larger sets of data because the technology has become faster and more powerful, especially with more memory available, but because the ability to do this work is now a skill in itself. Understanding Big Data and being able to manipulate and analyse large data sets is a valuable skill to develop now, as every analyst predicts that the use of Big Data is set to explode in the coming years.

The analyst firm IDC published a study in 2012 predicting that the amount of data available to analyse would increase 50 times by 2020. This prediction still holds – if anything, the figure may be even higher by 2020, as new data-creating applications are launched all the time, from smart watches to other wearable technologies.

All this has led to a fear that there will not be enough people available to work on Big Data projects. McKinsey believes that the US will need almost 200,000 new data scientists by 2018 and the British Royal Academy of Engineering has predicted a need for 1.25 million graduates in science and technology subjects by 2020.

A recent column in the British ‘Daily Telegraph’ claimed that this shortfall in data scientists might even be a threat to the future of business. But what these concerns in the UK and US often fail to acknowledge is that many other countries have an abundance of data scientists.
Offshore outsourcing has long been a proven strategy for software development and other IT requirements. It will be exactly the same once data science becomes a mainstream part of every business – and companies like IBA Group are already doing it today.

Source: The skills shortfall that threatens our big data future

Defining Big Data

by Irina for IBA Group
Posted on September 4, 2014

IBA Group
Mark Hillary

We have talked about Big Data on this blog before and tried to define it in a way that doesn’t require complex terms, but it is not easy. Many people have very conflicting views on what Big Data is and how their company uses – or will use – it.

A fascinating feature article in the business journal Forbes explores 12 different definitions of Big Data, starting from when the term was first used in the 1990s. That’s right – we were all talking about Big Data back in the 90s; it’s not a recent term. The first known recorded use of the term was in a paper published by NASA in 1997 describing the problems of trying to work with enormous data sets that could not be loaded into a computer at once.

The Oxford English Dictionary is possibly the best place to turn for a simple non-technical definition: “data of a very large size, typically to the extent that its manipulation and management present significant logistical challenges.” That’s clear and focused, but also doesn’t really give away any clues about the scale of the challenge faced when manipulating many Big Data sets.

Wikipedia uses a very similar definition to the OED, but the advantage of Wikipedia is that the crowd updates it regularly. As attitudes to Big Data change in the IT marketplace, the online definition can change. The latest Wikipedia definition (last updated on Sep 1) says: “Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process using on-hand data management tools or traditional data processing applications.”

You can click this link to read the entire Forbes story, but I would be interested in your own views. Is it possible to define Big Data in 140 characters? If you can, then why not tweet your answer to @ibagroup?

http://www.forbes.com/sites/gilpress/2014/09/03/12-big-data-definitions-whats-yours/

Genetic Programming

by Irina for IBA Group
Posted on August 28, 2014

IBA Group
Pavel Charnysh

Artificial intelligence, intelligent self-learning machines, systems that can advise on how to do a job better, and robotics – all of this has always seemed like magic to me. When I was at university, I was carried away by these topics. It is even more fascinating to apply knowledge from one field to another and thus solve tasks that once seemed unsolvable.

What do you think about the interaction of artificial intelligence with the theory of evolution, one of the most interesting topics in biology? As I worked with algorithms rather than hardware, I kept wondering how we could teach a computer to be intelligent. I did research on the topic within an internal project at IBA. This article gives an overview of Genetic Programming and my thoughts on how to use it in software development.

As Wikipedia puts it, ‘Genetic Programming (GP) is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. Essentially GP is a set of instructions and a fitness function to measure how well a computer has performed a task. It is a specialization of genetic algorithms (GA) where each individual is a computer program. It is also a machine learning technique used to optimize a population of computer programs according to a fitness landscape determined by a program’s ability to perform a given computational task’.

http://en.wikipedia.org/wiki/Genetic_programming

In my view, GP is a way to find a solution using atomic user-defined blocks.

The following are the main principles of GP:

1. Initially, we are given historical data and want to build the program that best reproduces the behaviour behind them. We assume the data were produced by a kind of black box: given its historical inputs and outputs, we want to reconstruct a system that follows the same rules and principles for future use.

2. All programs are members of a population. This means we get not a single solution but a set of solutions, and we can choose the most suitable one.

3. The population is changed iteratively. At each iteration, the fittest programs are selected for crossover and for replenishing the population.

4. A fitness function is used to determine whether a program is fit for purpose. It is a user-defined metric that numerically expresses the ability of a program to solve the defined task (how well it maps the inputs to the outputs of a given data set).

5. The fittest individuals of a population are selected to develop the population further, just as evolution selects species.

6. The changes applied to the surviving members are similar to those in biological evolution: a member can be mutated or crossed with another member of the population.

7. A stop condition is defined on the fitness function; when it is met, GP stops and a solution is picked.

To grow a population, a programmer must define the primitive blocks that form an individual. These are terminals, including constants and variables, and primitive expressions such as +, -, *, /, cos, sin, if-else, foreach, and other predicates. Any program can be presented as a tree built from these blocks, and in this way any individual in the population can be represented.
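As a minimal sketch of this idea – my own illustrative Python with an invented toy problem, not code from the IBA project – an individual can be stored as a nested tuple tree of primitive blocks, and a fitness function can measure its error on the historical input/output data:

```python
import math

# Primitive expressions; terminals are the variable 'x' and numeric constants.
PRIMITIVES = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    'cos': lambda a: math.cos(a),
}

def evaluate(node, x):
    """Recursively evaluate a tree node: ('op', child, ...) | 'x' | constant."""
    if node == 'x':
        return x
    if isinstance(node, (int, float)):
        return node
    op, *children = node
    return PRIMITIVES[op](*(evaluate(c, x) for c in children))

def fitness(tree, data):
    """Lower is better: total squared error over historical (input, output) pairs."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

# Toy 'black box': suppose the historical data were produced by y = 2x + 1.
history = [(x, 2 * x + 1) for x in range(-5, 6)]
individual = ('+', ('*', 2, 'x'), 1)   # the tree for 2*x + 1
print(fitness(individual, history))    # 0 -- a perfect fit
```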


Mutation and crossover are then represented as follows. We take a randomly selected subtree from one individual and use it to replace a randomly selected subtree of another individual – that is crossover. We can also take a randomly selected node of an individual and replace it with a randomly generated subtree – that is mutation.
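Continuing the sketch above (again illustrative rather than production code), crossover and mutation become subtree replacements on these tuple trees, and a short evolutionary loop ties the seven principles together – population, iteration, fitness, selection, and a stop condition:

```python
import random

def random_subtree(depth=2):
    """Generate a small random tree from the same primitive blocks."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-5, 5)])
    op = random.choice(['+', '-', '*'])
    return (op, random_subtree(depth - 1), random_subtree(depth - 1))

def all_paths(node, path=()):
    """Enumerate the paths to every node of a tuple tree."""
    yield path
    if isinstance(node, tuple):
        for i, child in enumerate(node[1:], start=1):
            yield from all_paths(child, path + (i,))

def subtree_at(node, path):
    for i in path:
        node = node[i]
    return node

def replace_at(node, path, new):
    if not path:
        return new
    i = path[0]
    return node[:i] + (replace_at(node[i], path[1:], new),) + node[i + 1:]

def crossover(donor, receiver):
    """Replace a random subtree of 'receiver' with a random subtree of 'donor'."""
    piece = subtree_at(donor, random.choice(list(all_paths(donor))))
    return replace_at(receiver, random.choice(list(all_paths(receiver))), piece)

def mutate(tree):
    """Replace a random node with a randomly generated subtree."""
    return replace_at(tree, random.choice(list(all_paths(tree))), random_subtree())

# Minimal evolutionary loop, reusing fitness() and history from the sketch above.
population = [random_subtree(3) for _ in range(50)]
for generation in range(100):
    population.sort(key=lambda t: fitness(t, history))
    if fitness(population[0], history) == 0:      # stop condition reached
        break
    survivors = population[:10]                   # select the fittest
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(40)
    ]
print(population[0], fitness(population[0], history))
```

On the toy data this usually rediscovers a tree equivalent to 2*x + 1 within a few generations.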

Genetic Programming is used for neural network learning and numeric computing, as well as to approximate complex functions. I was researching GP as a way to imitate the activity of a specific person. While doing a job, an employee makes both optimal and non-optimal decisions, and this is what makes human thinking different from digital technology. If you are asked to point south, you will not be able to do it without some error, and this ‘white noise’ is an individual quality. Sometimes we do not need an accurate answer to solve a problem. After gathering the data, we can imitate the employee’s behavior on new tasks using a computer program.

Genetic Programming is also useful for creating an artificial player for a game with different difficulty levels. The behavioral algorithms of an artificial player are typically ideal, so a human cannot beat it unless it deliberately plays to lose. Consequently, we need to make the artificial intelligence a little more human by allowing it to make mistakes from time to time. This is where GP algorithms come in, because they can teach the artificial intelligence human behavior, including the ability to make mistakes.

Isn’t that remarkable? I think it is real magic, and those who create smart computers are magicians. Who knows – maybe it is a way to put a soul into a computer.

Is jargon preventing an acceptance of Big Data?

by Irina for IBA Group
Posted on August 18, 2014

IBA Group
Mark Hillary

The Smart Data Collective blog recently published a view that there is too much jargon circulating in the industry related to Big Data. In fact, as I mentioned in my last blog, Big Data is itself a term that is often misunderstood and needs more clarity.

The blog post is interesting because the author takes a good example of an overused business term, ‘digital’, and explores what we mean when we read and use it. Many of the dictionary definitions have nothing at all to do with the meaning of digital business you might expect – modern, hi-tech, and connected.

In fact, many more terms are taken from the dictionary and bent into something new by technology companies. ‘Innovate’, ‘disrupt’, and ‘thought leadership’ all mean something different if you are not working for a technology company. So how can we improve the use of Big Data as a term?

The advantage we have is that Big Data is a genuine and meaningful area of data science. It’s not just jargon created for use by MBA students as they discuss their plans for ‘wealth-generation’.

Big Data needs to be understood by the general public and by company leaders who have never really felt they had to understand technology before. But almost everyone has now used Facebook or contacted a customer service centre, so it is becoming easier to connect the theory of how Big Data can be used with the ways people see it every day.

http://smartdatacollective.com/tracey-wallace/222436/big-data-jargon-we-all-need-reign-right-now

Understanding Big Data

by Irina for IBA Group
Posted on August 11, 2014

IBA Group
Mark Hillary

Big Data is a subject we have explored often on this blog because it’s an area where IBA has extensive experience and knowledge, but it is often difficult to explain. How big does a database need to be before it can be considered ‘Big Data’ and why do we need this separate terminology to refer to manipulating and analysing this data when relational databases have been in use for decades?

One example that goes a long way to answering these questions is the way that customer service is changing – especially for retailers. Products used to have a phone number and email address that customers could use to reach the manufacturer or retailer – usually to complain after a purchase.

Now, customers use online forums, review sites, Twitter, and Facebook, as well as the more traditional direct channels such as online chat, email, or a voice call. Customers are not always contacting a brand directly when they comment on a product, yet they usually expect a response.

Exploring this mass of information is a classic Big Data example. Retailers want to connect all these communication channels into an ‘omnichannel’, yet this is impossible when each channel is treated as a separate, conventional database.

If a customer emails a complaint, then tweets about the problem because the email is not answered, and finally calls because neither the tweet nor the email has received a response, the ideal situation for the retailer is that the agent on the phone already knows about the earlier email and tweet.

But making this work is not easy. The company has no control over Facebook or Twitter – it is not internal data. And how can the comments in a tweet be connected to a customer on the telephone?
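As a toy illustration of the idea – the records, handles, and matching table below are all invented, and real identity resolution is far harder – the core step is joining interaction records from every channel on one resolved customer identity:

```python
from collections import defaultdict

# Hypothetical interaction records arriving from separate channels.
interactions = [
    {'channel': 'email',   'handle': 'anna@example.com', 'text': 'Broken on arrival'},
    {'channel': 'twitter', 'handle': '@anna_k',          'text': 'Still no reply!'},
    {'channel': 'phone',   'handle': '+420123456789',    'text': 'Calling about my email'},
]

# The hard part in practice: mapping every channel handle to one customer.
identity = {
    'anna@example.com': 'customer-42',
    '@anna_k':          'customer-42',
    '+420123456789':    'customer-42',
}

history = defaultdict(list)
for event in interactions:
    history[identity.get(event['handle'], 'unknown')].append(event)

# The phone agent can now see the earlier email and tweet.
for event in history['customer-42']:
    print(event['channel'], '-', event['text'])
```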

All this is feasible if you have enough information from various sources and can analyse it quickly enough. Every company that interacts with its customers is now exploring this problem, so maybe Big Data is about to hit the headlines again.

Messaging and messengers

by Irina for IBA Group
Posted on August 4, 2014

IBA Gomel
Ihar Kalesnik

It is common knowledge today that communication is king. The word ‘communication’ covers face-to-face conversation, online and telecommunications, video conferencing, and other methods, including texting and messaging.

Mobility is another aspect of our everyday life. Mobile devices reach into a growing number of areas of life, making it easier and less tied to specific places, be it home, the office, or anywhere else.

Accordingly, a growing number of mobile applications have appeared, and old ones are increasingly replaced with new ones. However, mobile applications often fail to meet our expectations. Sometimes the features that were available earlier and made an application so convenient are no longer there. Some applications are too hard to set up or tune. Others, on the contrary, install easily but have too many functions, most of them useless. Eventually, we begin looking for something new again.

This is also true of message exchange software. Depending on personal habits and preferences, everyone has his or her own choice of favorite messengers.

Drawing on our own experience and preferences, IBA developed a mobile messaging application that runs on Android. IBM Lotus Sametime served as the starting point and basis for the application.

IBM® Sametime® products integrate real-time social communications into the business environment, providing a unified user experience through instant messaging, online meetings, voice, video, and data. With just one click, you are immediately connected to the person behind the information, which helps you meet the ongoing demands of everyday business.

In the IBA messenger, we implemented a familiar and useful set of functions, making the application easy to customize and use. As options, we added integration with the mobile device’s address book and the ability to place a phone call to any contact in it.

(Screenshots: IBA Messenger)

Since the application was published on Google Play, many people have been using it successfully. Numerous positive reviews testify to its usability and popularity.

At present, messaging systems take different approaches to interfaces, protocols, and methods of interaction. Producers and developers keep working to improve existing applications and create new ones. Using these applications, it is possible to exchange text, images, and audio and video files, to hold voice and video calls, and to use many other functions.

IBA is also planning to expand the messenger’s functionality. The IBA team is currently working on the next release of the application, which will include new functions based on user feedback from Google Play. The release will add such useful features as file transfer, extended status support, and automatic reconnection. We intend to launch the new release on Google Play in September 2014.

In the near future, we are going to develop an iOS version of the application.

I invite you to try the messenger. You are also welcome to leave comments on how we can improve it :)

https://play.google.com/store/apps/details?id=com.iba.gomel.andy.chat&hl=en

Big Data and Cloud in decision-making

by Irina for IBA Group
Posted on July 25, 2014

IBA Group
Aleš Hojka, CEO of IBA CZ
Vitězslav Košina, Business Consultant at IBA CZ

It is a reality today that organizations have to deal with a multitude of unstructured documents and other data. These data have true value only if they are processed and extracted properly and promptly, and supplied with genuinely useful links.

Almost every business has in some way implemented a data management system (DMS), a content management system (CMS), or a business intelligence (BI) solution. Unfortunately, a new system often provides very little added value, especially when a large amount of data is involved. Getting reasonable output from unstructured data is problematic. Why is that?

DMS, CMS, and BI systems can be really effective if they meet two essential requirements. The first is a sufficiently flexible environment with quick access to information, complemented by strong information security. A really flexible environment should be able to respond to resource and computing requests in a very short time – practically in real time. This can easily be achieved with a cloud-based solution. As these requests are dynamic and cannot be predicted when the system is designed, cloud can be very instrumental. In addition, keeping resources in reserve for a ‘rainy day’ is not an effective allocation of resources.

Cloud is therefore a prerequisite for dealing with Big Data, but it is not the only one. Even with an environment designed for large amounts of data, documents and data are sometimes not available anytime and anywhere without limitations.

It should be noted that effective decision-making involves an ever-growing amount of information. This points to the second requirement: as the information should be available on mobile phones and tablets, an information management system cannot transfer huge amounts of data and must have a short response time. If we meet these two basic requirements, we are poised for truly efficient extraction and processing of large amounts of data, which can then be aggregated and analyzed to make a well-grounded decision, close a deal, and so on.

For many companies, using Software as a Service (SaaS) is a problem because they deal with sensitive commercial information or client data. From the perspective of a cloud-based solution, this calls for a dedicated cloud as opposed to shared capacities.

Many organizations outsource only the infrastructure, whereas we recommend outsourcing the entire solution. With a cloud solution, security is addressed to the extent that corporate clients normally require. In this case, only the needed capacity is commissioned, and there is no need to reserve resources for a team to take care of the infrastructure and applications. The organization can thus focus on business and management objectives, as well as on providing added value to its clients.

With mobile devices, security is a very thorny issue, because attacks on them are more likely than on computers located on the company’s premises, where they are protected physically. The good news is that these devices are now so powerful that they allow for the strongest encryption and other elements of modern security.

Given the links between documents, their associated metadata, and other data sources, cloud computing is the best solution one can choose. It is, however, possible, if needed, to move from SaaS to a model in which the solution is managed by the customer, without losing the flexibility that cloud offers in terms of resources. Using cloud is much cheaper than building one’s own infrastructure. Moreover, it enables organizations to concentrate on their requirements and business needs instead of on specific software or infrastructure.

Internet of Things

by Irina for IBA Group
Posted on July 18, 2014

IBA Group
Mark Hillary

I have written recently about Business Analytics (BA). What is BA? How does it affect your IT strategy and your business in general? I have also observed that BA and Big Data (BD) are closely related concepts.

To clear up any confusion, I would say that BA is related to taking a set of data, performing a modelling operation, and using the model to predict some kind of future state – what-if calculations. BD is more of a continuous analysis of very large-scale business information.
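To make the BA half of that distinction concrete, here is a hedged sketch of a what-if calculation – the monthly figures are invented, and scikit-learn is assumed as the modelling tool:

```python
# Illustrative what-if calculation with made-up figures.
from sklearn.linear_model import LinearRegression

# Historical data: monthly ad spend (thousands) against monthly sales (thousands).
ad_spend = [[10], [15], [20], [25], [30]]
sales = [100, 130, 155, 185, 210]

model = LinearRegression().fit(ad_spend, sales)

# What-if: what would sales look like if we spent 40,000 next month?
print(model.predict([[40]])[0])
```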

But the business concept that is really driving forward the importance of both Business Analytics and Big Data is the Internet of Things (IoT).

http://www.forbes.com/sites/mikekavis/2014/06/26/the-internet-of-things-will-radically-change-your-big-data-strategy/

Even for a fairly short blog post, this is already starting to fill up with three-letter acronyms, so let’s define what the IoT really is. More and more devices are capable of communicating over the existing Internet infrastructure. It used to be computers that we connected to the Internet, then laptops, then smartphones. Now it is tablets, e-book readers, televisions, and every corporate electronic system you can think of – from security systems to electricity meters to photocopiers.

This revolution in making almost every device connect to the Internet is the starting point for the IoT. The classic consumer example is usually the connected fridge that can recommend a dinner based on what is inside, though a more useful example might be your car diagnosing a problem and communicating with the service centre without any interaction on your part.

In 1999, about 250 MB of data was created per person each year. By 2012, ten times that amount per person was being generated. Data creation is increasing, and the speed of the increase is accelerating. Every day, people generate data with their smartphones without doing anything – just by switching them on, connecting to the Internet, and allowing applications to work in the background.

This change in both the consumer and corporate environments is driving both the need for continuous Big Data analysis and the ability to predict what may happen next using Business Analytics tools.

Forbes: A short history of Big Data