by Irina for IBA Group
Posted on October 28, 2014
Lavy Itzhaky, PMP®
In one of my recent projects, where I was asked to step in as project manager, almost everything was in place to make the project a failure from the very beginning. The customer and management were unhappy, the project team was blamed for everything, and a list of smaller shortcomings topped it all off. Yet eventually the project was delivered to the client on time and to the client’s satisfaction.
The secret to putting a failing project back on track is not magic, sleepless nights, or a magnificent project manager. In this particular project, the secret was in making people do their jobs and not expecting them to do work they were not hired for. You cannot expect a junior developer to get on calls with the customer to clarify requirements or report the project status. It’s not that I don’t trust the guys – they are great developers, but they do not speak the same language the customer does.
As a friend of mine told me, a project team is an orchestra: everyone in it has an individual role to play, there are people behind the scenes who also contribute to the success of the performance, and the project manager is the conductor, who has to make sure that everyone plays his or her assigned role. The Business Analyst gets the requirements from the customer and “translates” them to the developers, the Architect defines the architecture of the software solution, the developers develop it, and the testers test it.
In the above example, the main problem was too many communication channels: a developer would talk directly with the customer and provide him or her with the project status, wrongly assuming that the developer knows everything and not only the assigned part. This is a recipe for misunderstanding and trouble in a project. Everyone in the project has to be responsible enough to do his or her own job and not let possible personal ambitions ruin the project.
Everyone needs to do their own part in the “orchestra” of the project. They can and should evolve and learn new stuff but in cooperation with the “conductor”. Otherwise, it will negatively impact the project.
As I said in my post entitled Manager in every one of us, evolve yourself, become a better specialist, become a manager, but DON’T STOP!
Big Data sometimes appears to be a solution looking for a problem. It can look like a technology that has very little use in the real world of business, with technologists rushing around the world looking for examples of how it can be applied.
But I had a conversation this week with a Big Data expert and asked a question about customer service in retail – isn’t this one of the areas where Big Data is having the most impact? He agreed that it is one of the most affected industries, and for several reasons.
Everyone knows that customer service in just about every industry has changed. Consumer goods used to feature a telephone number or email address you could use to ask questions or complain. Now customers use many different channels to comment on a product, and many of them have no direct link to the manufacturer.
Customers today are familiar with at least seven channels when contacting brands: email, voice, chat, Twitter, Facebook, forums, and review websites. And these are just the main channels in use now. Many brands are interacting with customers on other social networks, such as Pinterest or Instagram, and other communications tools, such as Whatsapp, are rapidly being adopted.
So customers are using many different ways to communicate. Often there is no formal notification to the brand involved – the brand is just expected to find the question online.
And now consider the retail industry. All these communication changes are taking place, and at the same time the way people want to purchase items is changing. They might buy in-store, online for delivery, or online with collection in-store, and they might want to return or exchange an item in-store even though it was purchased online.
The communication chain between a brand and the customer is far more complicated than a decade ago, but so is the supply chain. Enter Big Data. These real-life business problems are exactly where Big Data is moving from concept to daily use.
If you want to analyse a complex supply chain in real time and explore how your customers prefer to shop, how they behave, and where items are missing, then all these questions can only be answered with an enormous data set that is constantly changing.
Likewise for communications with customers. If they are communicating anytime, from anywhere, on any channel, then you need an analysis function just to monitor communications, but by employing Big Data techniques you can also predict and focus on the most important channels.
I think that 2015 will be the year when we finally stop talking about Big Data as the exception and start considering it just a part of business as usual – in any industry.
by Irina for IBA Group
Posted on October 6, 2014
I’ve written about Big Data in several blogs here: what it is, how it can be defined, and even how it can be used. But there are two additional factors that really help with any understanding of how Big Data fits into the modern organisation.
How do you make it work – what tools are needed to move from just having a large amount of data available to being able to gain insights from that data?
Do I have the kind of data that I can get insights from? What kind of insights am I expecting from the information that I have?
I have added two links to recent news stories that can elaborate further on these questions.
It is no use to anyone if you have an enormous amount of data but no tools to analyse it. You can’t engage with Big Data using an Excel sheet. The volumes involved are often enormous – far more than you could load into a computer to study as a single block of information.
So this is one of the first issues to address, and it may be where an expert partner can help the most. Is your data available? What tools can you use to collect together all that structured and unstructured data, and how can you even start to analyse it?
Now, what are the kinds of insights you can find? It depends on your business and the type of data available, but with Big Data you can uncover all kinds of relationships between variables that were not visible without the analysis.
The way Big Data is helping to predict where Ebola will strike next is a great example. It takes information we already have, such as infections, locations of hospitals, number of doctors and so on, and uses past knowledge and these factors to make predictions. Now imagine applying these insights in your own business.
Making Big Data work with the right tools and determining the type of insight you need are two important factors in planning how you can make it work for you.
One of the latest trends in the IT industry, cloud computing is rapidly growing and has good prospects for the future. It represents a new kind of service that provides on-demand scalability, cost reduction, and a possibility to utilize IT resources efficiently.
Although cloud computing is still in its early stages, it has already proven to be a revolutionary turn in the IT industry. Moreover, cloud computing represents not only fundamental changes in IT, but also a change in the business environment in general. The main reason for this business change is the wide range of benefits the cloud provides for all types of enterprises, including SMEs and large-scale organizations, in terms of lower IT spending and wider business opportunities. All this makes cloud computing a game changer in the industry.
The European Commission may serve as an indicator of the pace of cloud adoption. It has already developed the EU Cloud Strategy to unleash the potential of cloud computing in Europe. The European Commission believes that cloud computing can increase productivity and create new businesses, services, and jobs.
What makes cloud computing so promising? It enables companies to reap a lot of benefits from highly valuable IT assets, including infrastructure resources, middleware, software, and computing resources without actually buying these assets but consuming them as a service.
For example, when a customer deploys software in the traditional way, it buys a license to acquire the software. With Software as a Service, customers do not need to own the license; they just pay a subscription fee instead. In other words, cloud computing is characterized by a lower cost of entry and quicker ROI. As a result, organizations reduce IT-related costs and make IT spending more predictable.
Analytical agencies are thoroughly investigating the trend of cloud computing and related issues. IDC found that 81% of enterprises reported lower IT costs, with reductions of 10 to 20%, and that 12% of enterprises reported savings of 30% or more.
The significant savings encourage businesses to think about migration to a cloud service model. Traditional IT is not able to provide such cost savings due to higher entry costs and subsequent high expenditures on support, management, and maintenance activities. The scale of cost reduction in percentage experienced by businesses that adopted a cloud model is presented in the figure below.
To be a strong player in the market of cloud services, IBA Group is working to deepen its knowledge, master new skills, and gain wider experience.
IBA Group specialists are skilled in virtualization products and technologies, including the VMware vSphere server virtualization platform (installation, configuration, and management), Windows Server Hyper-V and System Center, and the VMware ESX/ESXi and KVM hypervisors. In addition, IBA experts are certified in the ITIL v3 framework, which is applicable to cloud services.
IBA Group developed a proprietary solution called IBA Cloud Solution. It provides a reliable network, computing, and disk architecture with backup and related software, and offers easy, step-by-step migration from an existing physical infrastructure to a virtual one. The solution is based on IBM Cloud & Smarter Infrastructure and uses highly reliable IBM BladeCenter hardware. Virtualization is based on the leading virtualization platform, VMware vSphere.
For details of the EU cloud strategy, visit Unleashing the Potential of Cloud Computing in Europe
by Irina for IBA Group
Posted on September 11, 2014
As the economy develops, old jobs vanish and new ones are created. This process has always taken place, as technology creates new needs and old skills are replaced. Just consider how important the blacksmith was before cars were in common use, and if someone had described themselves as a blogger or Flash developer in 1985, it would have made no sense – times change.
Big Data is another of these major changes – not just in the sense that we can now analyse larger sets of data because the technology has become faster and more powerful, especially with more memory available, but because the ability to do this work is now a skill in itself. Understanding Big Data and being able to manipulate and analyse large data sets is a skill worth exploring now, as every analyst predicts that Big Data use will explode in the coming years.
The analyst firm IDC predicted in a 2012 study that the amount of data available to analyse would increase 50 times by 2020. This prediction still holds – if anything, the figure may be even higher, as new applications that create data are launched all the time, from smart watches to other wearable technologies.
All this has led to a fear that there will not be enough people available to work on Big Data projects. McKinsey believes that the US will need almost 200,000 new data scientists by 2018 and the British Royal Academy of Engineering has predicted a need for 1.25 million graduates in science and technology subjects by 2020.
A recent column in the British ‘Daily Telegraph’ claimed that this shortfall in data scientists might even be a threat to the future of business. But what all these concerns in the UK and US often fail to acknowledge is that there are many other countries with an abundance of data scientists.
Offshore outsourcing has long been a proven strategy for software development and other IT requirements. It will be exactly the same once data science becomes a mainstream part of every business – and companies like IBA Group are already doing it today.
We have talked about Big Data on this blog before and tried to define it in a way that doesn’t require complex terms, but it is not easy. Many people have conflicting views on what Big Data is and how their company uses – or will use – it.
A fascinating feature article in the business journal Forbes explores 12 different definitions of Big Data, starting from when the term was first used in the 1990s. That’s right, we were all talking about Big Data back in the 90s – it’s not a recent term. The first known recorded use was in a paper published by NASA in 1997 describing the problem of working with enormous data sets that could not be loaded into a computer at once.
The Oxford English Dictionary is possibly the best place to turn for a simple non-technical definition: “data of a very large size, typically to the extent that its manipulation and management present significant logistical challenges.” That’s clear and focused, but also doesn’t really give away any clues about the scale of the challenge faced when manipulating many Big Data sets.
Wikipedia uses a very similar definition to the OED, but the advantage of Wikipedia is that the crowd updates it regularly. As attitudes to Big Data change in the IT marketplace, the online definition can change. The latest Wikipedia definition (last updated on Sep 1) says: “Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process using on-hand data management tools or traditional data processing applications.”
You can click this link to read the entire Forbes story, but I would be interested in your own views. Is it possible to define big data inside 140 characters? If you can, then why not tweet your answer to @ibagroup?
Artificial intelligence, intelligent self-learning machines, systems that can advise on how to do work better, and robotics – all of this has always seemed like magic to me. When I studied at university, I was carried away by these topics. It is even more fascinating to apply knowledge from one field to another and thus solve tasks that once seemed unsolvable.
What do you think about the interaction of artificial intelligence with the theory of evolution, one of the most interesting open areas in biology? As I worked with algorithms rather than hardware, I kept wondering how we could teach a computer to be intelligent. I did research on the topic within an internal project at IBA. This article gives an overview of Genetic Programming and my thoughts on how to use it in software development.
As Wikipedia puts it, ‘Genetic Programming (GP) is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. Essentially GP is a set of instructions and a fitness function to measure how well a computer has performed a task. It is a specialization of genetic algorithms (GA) where each individual is a computer program. It is also a machine learning technique used to optimize a population of computer programs according to a fitness landscape determined by a program’s ability to perform a given computational task’.
In my view, GP is a way to find a solution using atomic user-defined blocks.
The following are the main principles of GP:
1. Initially, we are given past performance data and want to build a program that reproduces the underlying rules on future data. We assume that the initial data were produced by a kind of black box: we have its historical inputs and outputs, and we want to reproduce a system with the same rules and principles for future use.
2. All programs are members of a population. This means we get not a single solution but a set of solutions, and we can choose the most suitable one.
3. Changes in a population are made iteratively. At each iteration, the fittest programs are selected for crossover and for replenishing the population.
4. A fitness function is used to determine whether a program is fit for the purpose. It is a user-defined metric that numerically expresses the ability of a program to solve the defined task (to fit the mapping of input to output parameters for a given data set).
5. The fittest individuals of a population are selected to develop the population, just as evolution selects species.
6. The changes applied to the surviving members are similar to biological evolution: a member can be mutated or crossed with another member of the population.
7. A stop condition is defined for the fitness function that determines when one can stop GP and pick a solution.
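The seven principles above can be sketched as a minimal evolutionary loop. For simplicity, this toy uses a pair of numeric coefficients as an “individual” rather than a full program tree – that is, a plain genetic algorithm, of which GP is a specialization – and the target “black box”, population sizes, and operators are all illustrative assumptions, not a definitive implementation:

```python
import random

# Toy "black box": we only see its historical inputs and outputs (principle 1).
data = [(x, 3 * x + 7) for x in range(-10, 11)]

def fitness(ind):
    """Lower is better: sum of squared errors over the historical data (principle 4)."""
    a, b = ind
    return sum((a * x + b - y) ** 2 for x, y in data)

def crossover(p1, p2):
    # Combine pieces of two parents (principle 6).
    return (p1[0], p2[1])

def mutate(ind):
    # Randomly perturb one component of an individual (principle 6).
    ind = list(ind)
    ind[random.randrange(2)] += random.uniform(-1, 1)
    return tuple(ind)

random.seed(0)
# A population of candidate solutions, not a single one (principle 2).
population = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(50)]

for generation in range(200):                        # iterative changes (principle 3)
    population.sort(key=fitness)
    if fitness(population[0]) < 1e-6:                # stop condition (principle 7)
        break
    survivors = population[:10]                      # the fittest survive (principle 5)
    children = []
    while len(children) < 40:                        # replenish the population
        p1, p2 = random.sample(survivors, 2)
        children.append(mutate(crossover(p1, p2)))
    population = survivors + children

best = min(population, key=fitness)
print(best, fitness(best))
```

With a real GP system, the individuals would be program trees and `crossover`/`mutate` would operate on subtrees, but the surrounding loop stays the same.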
To grow a population, a programmer must define the primitive blocks that will form an individual. These are terminals, including constants and variables, and primitive expressions such as +, -, *, /, cos, sin, if-else, foreach, and other predicates. Any program can be represented as a tree built from these blocks, and thus so can any individual of a population.
We take a randomly selected subtree from one individual and use it to replace a randomly selected subtree of another individual – that is crossover. We can also take a randomly selected node of an individual and replace it with a randomly generated subtree – that is mutation.
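These two tree operations can be sketched as follows. Programs are represented here as nested Python lists of the form [operator, left, right]; the primitive set and the sample parents are illustrative assumptions:

```python
import random

OPS = ['+', '-', '*']          # primitive expressions
TERMINALS = ['x', 1, 2, 3]     # terminals: a variable and some constants

def random_tree(depth=2):
    """Grow a random program tree from the primitive blocks."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1)]

def nodes(tree, path=()):
    """Yield the path (sequence of child indices) to every node in the tree."""
    yield path
    if isinstance(tree, list):
        yield from nodes(tree[1], path + (1,))
        yield from nodes(tree[2], path + (2,))

def get(tree, path):
    """Return the subtree at the given path."""
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, subtree):
    """Return a copy of tree with the node at path replaced by subtree."""
    if not path:
        return subtree
    new = list(tree)
    new[path[0]] = replace(tree[path[0]], path[1:], subtree)
    return new

def crossover(a, b):
    """Replace a random subtree of a with a random subtree of b."""
    target = random.choice(list(nodes(a)))
    donor = random.choice(list(nodes(b)))
    return replace(a, target, get(b, donor))

def mutate(a):
    """Replace a random subtree of a with a freshly generated one."""
    target = random.choice(list(nodes(a)))
    return replace(a, target, random_tree())

random.seed(1)
parent1 = ['+', 'x', ['*', 2, 'x']]   # x + 2*x
parent2 = ['-', ['*', 'x', 'x'], 3]   # x*x - 3
print(crossover(parent1, parent2))
print(mutate(parent1))
```

Because `replace` copies rather than modifies, the parents survive unchanged – just as in the biological analogy, offspring are new members of the population.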
Genetic Programming is used for neural network learning and numeric computing, as well as to approximate complex functions. I researched GP as a way to imitate the activity of a particular person. While doing a job, an employee can make both optimal and non-optimal decisions, which makes human thinking different from digital technologies. When you are asked to point south, you won’t be able to do it without some error, and this ‘white noise’ is an individual quality. Sometimes we do not need an accurate answer to solve a problem. After gathering enough data, we can imitate an employee’s behavior on new tasks using a computer program.
Genetic programming is also useful when creating an artificial player for a game with different difficulty levels. The behavioral algorithms of an artificial player are typically ideal, so a human cannot beat it unless it deliberately plays to lose. Consequently, we need to make the artificial intelligence a little more human by allowing it to make mistakes from time to time. GP algorithms suit this purpose because they can teach the artificial intelligence human behavior, including the ability to make mistakes.
Isn’t that remarkable? I think it is real magic, and those who create smart computers are magicians. Who knows, maybe it’s a way to put a soul into a computer.
The Smart Data Collective blog recently published a view that there is too much jargon circulating in the industry related to Big Data. In fact, as I mentioned in my last blog, Big Data is itself a term that is often misunderstood and needs more clarity.
The blog is interesting because the author takes a good example of an over-used business term, ‘digital’, and explores what we mean when we read and use this term. Many of the definitions from the dictionary have nothing at all to do with the definition of digital business you might expect – modern, hi-tech, and connected.
In fact, many more terms are taken from the dictionary and bent and shaped into something new by technology companies. Innovate, disrupt, and thought leadership are all terms that mean something different if you are not working for a technology company, but how can we improve the use of Big Data as a term?
The advantage we have is that Big Data is a genuine and meaningful area of data science. It’s not just jargon created for use by MBA students as they discuss their plans for ‘wealth-generation’.
Big Data needs to be understood by the general public and by the company leaders that have never really felt that they had to understand technology before. But almost everyone has now used Facebook, or contacted a customer service centre, so it is becoming easier to connect the theory of how Big Data can be used to the ways in which people see it every day.
Big Data is a subject we have explored often on this blog because it’s an area where IBA has extensive experience and knowledge, but it is often difficult to explain. How big does a database need to be before it can be considered ‘Big Data’ and why do we need this separate terminology to refer to manipulating and analysing this data when relational databases have been in use for decades?
One example that goes a long way to answering these questions is the way that customer service is changing – especially for retailers. Products used to have a phone number and email address that customers could use to reach the manufacturer or retailer – usually to complain after a purchase.
Now, customers use online forums, review sites, Twitter, and Facebook, as well as the more traditional and direct channels such as online chat, email, or a voice call. Customers are not always directly contacting a brand when they comment on a product, yet they usually expect a response.
Exploring this mass of information is a classic Big Data example. Retailers want to connect all these communication channels into an ‘omnichannel’, yet this is impossible when each channel is treated as a separate, regular database.
If a customer emails a complaint, then tweets about the problem because the email is not answered, and finally calls because neither the tweet nor the email has been answered, the ideal situation for the retailer is that the agent on the phone knows about the earlier email and tweet.
But to make this work is not easy. The company has no control over Facebook or Twitter – it’s not internal data. And how can comments on a tweet be connected to a customer on the telephone?
All this is feasible if you have enough information from various sources and you can analyse it quickly enough. Every company that interacts with its customers is now exploring this problem, so maybe Big Data is about to hit the headlines again.
It is common knowledge today that communication is king. The word ‘communication’ covers face-to-face conversation, online and telecommunications channels, video conferencing, and other methods, including texting and messaging.
Mobility is another aspect of our everyday life. Mobile devices embrace a growing number of areas of life, making our lives easier and less tied to specific places, be it home, the office, or anywhere else.
Accordingly, a growing number of mobile applications have appeared, and old ones are increasingly replaced with new ones. However, mobile applications often fail to meet our expectations. Sometimes the features that were accessible earlier and made an application so convenient are no longer available. Some applications are too hard to set up or tune. Others, on the contrary, are installed easily but have too many functions, most of them useless. Finally, we begin looking for something new again.
This is also true about message exchange software. Depending on personal habits and preferences, everyone has his or her own choice of favorite messengers.
Taking into account our own experience and preferences, IBA developed a mobile messaging system that runs on Android. IBM Lotus Sametime served as a starting point and a basis for the application.
IBM® Sametime® products integrate real-time social communications into the business environment, providing a unified user experience through instant messaging, online meetings, voice, video, and data. With just one click, you are immediately connected to the person behind the information, which helps you meet the ongoing demands of everyday business.
In the IBA messenger, we implemented a familiar and useful set of functions, making the application easy to customize and use. As an option, we added a connection to the address book of the mobile device and the possibility to place a mobile phone call to any contact in the address book.
Since the application was published on PlayMarket, many people have been using it successfully. Numerous positive reviews testify to its usability and popularity.
At present, messaging systems differ in their interfaces, protocols, and methods of interaction. Producers and developers keep modifying existing applications and creating new ones. With these applications, it is possible to exchange text, images, and audio and video files, to conduct voice and video communication, and to use many other functions.
IBA is also planning to expand the messenger’s functionality. Currently, the IBA team is working on the next release of the application to include new functions based on user feedback from PlayMarket. The release will include such useful features as file transfer, extended status support, and automatic reconnection. We intend to launch the new release on PlayMarket in September 2014.
In the near future, we are going to develop an iOS version of the application.
I invite you to try the messenger. You are also welcome to leave comments on how we can improve it.