How AI Application is set to revolutionize the CPG Industry


The future is Artificial Intelligence (AI), and various industries are investing heavily in creating self-evolving AI applications. The CPG (consumer packaged goods) industry has until now been somewhat dormant and is looking at opportunities to improve efficiency and reduce expenses. Amazon, Microsoft and Facebook have been among the front runners in this domain. Amazon, for instance, spends more than 10% of its yearly revenue on technology research, whereas top CPG companies are still at 1-2%. This is changing, and changing fast: more and more CPG players recognize the potential of AI and are putting robust efforts into the area.

Expansion of AI Application in the CPG Industry

Overall, the possibilities of AI application in the CPG industry are infinite. Presently, however, AI application is still in its infancy, lagging far behind other sectors such as retail and technology. Even though investment in AI from CPG companies has increased considerably, most companies are still working on identifying the critical applications with high business impact. In 2015, CPG firms, on average, spent 0.66% of revenue on AI applications, and this percentage is expected to keep increasing as companies establish a more precise evaluation of their AI maturity. This article discusses the areas in which CPG companies can expect to find successful applications of AI.

Consumer Feedback

Receiving feedback from customers at massive scale usually involves leveraging natural language processing (NLP) for sentiment evaluation. Fundamentally, NLP focuses on teaching a machine to infer the gist of raw text. It is tremendously valuable, but more complicated and resource-demanding than processing structured data. Structured data is highly organized and easily interpreted by machines: an AI program can readily process names, credit card numbers, geo-locations, stock data, etc. Analyzing customer sentiment, on the other hand, requires far more effort, and the analysis itself is only the first step. A comprehensive AI-powered system must also convey the analysis to the company’s customer feedback manager in plain and simple terms so that the necessary changes can be made.
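For illustration only, here is a minimal sketch of sentiment scoring on raw feedback text using NLTK's VADER analyzer; the feedback strings and the 0.05 cut-offs are placeholders, not a production configuration.

```python
# Minimal, illustrative sentiment-scoring sketch using NLTK's VADER analyzer.
# The feedback strings and the 0.05 thresholds are placeholders.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

feedback = [
    "The new packaging is great, much easier to open.",
    "Product arrived damaged and customer service never replied.",
]

for text in feedback:
    scores = analyzer.polarity_scores(text)  # dict with neg/neu/pos/compound
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05
             else "neutral")
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```

In a real pipeline, the per-review scores would be aggregated by product or region and surfaced to the feedback manager in plain language, as described above.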

For instance, Hitachi devised a way to analyze customer feedback in a bid to reduce food wastage. They conducted a test in a hospital where trolleys mounted with cameras were used to collect trays from patients. The camera clicked images of the leftovers, and machine learning was used to detect the patterns of leftovers. In future servings, these wasted food items were not included in the patients’ meals.

Supply Chain

Another heavily researched field of AI application in the CPG industry is forecasting consumer demand. Historical data on items sold and wasted can help businesses forecast market demand efficiently. At the retail level, supermarkets will be able to stock precise amounts of food, considerably reducing wastage and avoiding stock shortages. CPG companies can also monitor product locations and stock availability using AI tools, reducing manual effort and boosting effectiveness in logistics.

For example, a major apparel company was faced with rising supply chain expenses: its products were not reaching potential customers, making lost sales a critical problem. Even a 1% recovery could provide a substantial increase in its yearly revenue. The company implemented AI to examine its products and discover how in-demand they were in the eyes of the customer. The AI program was able to forecast precise classifications by store and by item, predicting which store would sell which item and ranking each item by expected demand. Based on this analysis, the company was able to cut down on excess inventory and provide its customers with improved product availability.
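As a simplified illustration of the forecasting idea (not the apparel company's actual system), the sketch below predicts next-week demand per store from lagged sales using scikit-learn; the data is synthetic and the feature choices are assumptions.

```python
# Illustrative demand-forecast sketch: predict next-week units per store
# from recent sales history. Data and feature choices are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
weeks = 52
sales = pd.DataFrame({
    "store": np.repeat(["S1", "S2"], weeks),
    "week": np.tile(np.arange(weeks), 2),
    "units": rng.poisson(lam=100, size=2 * weeks),
})

# Lag features: demand in the previous three weeks.
for lag in (1, 2, 3):
    sales[f"lag_{lag}"] = sales.groupby("store")["units"].shift(lag)
sales = sales.dropna()

X, y = sales[["lag_1", "lag_2", "lag_3"]], sales["units"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Forecast the next week for each store from its three most recent weeks.
next_week = (sales.groupby("store")["units"]
             .apply(lambda s: s.iloc[-3:][::-1].to_list()))   # [t, t-1, t-2]
X_next = pd.DataFrame(next_week.tolist(), index=next_week.index,
                      columns=["lag_1", "lag_2", "lag_3"])
print(X_next.assign(forecast=model.predict(X_next)))
```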

Marketing

A recent Nielsen survey revealed that over seventy per cent of CPG marketing investments fail to break even. The major problem CPG companies have with marketing is that they fail to integrate forecasting and planning in order to find the best promotional approach. AI can introduce a data-driven method for CPG marketers: powered by historical data, they can easily detect which marketing avenues are expected to generate maximum returns. An efficient program will be able to forecast and recommend whether an in-store tactic such as a ‘buy two, get one free’ offer is the most effective for a specific product or brand, or whether television advertisements will give the desired results.

AI programs can evaluate thousands of scenarios, incorporating even the smallest data points, before suggesting which promotional channel will deliver the best results. Providing marketers with vital data like this is the only way for CPG companies to implement operationalized marketing projects in a large-scale yet cost-effective manner.
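A minimal, hypothetical sketch of this data-driven idea: estimate the expected return of each promotional channel from historical campaign results and rank them. All column names and figures below are invented for illustration.

```python
# Hypothetical channel-ranking sketch: rank promotion channels by historical
# return on spend. All figures are invented for illustration.
import pandas as pd

campaigns = pd.DataFrame({
    "channel":           ["in_store_bogo", "tv_ad", "digital_coupon", "tv_ad", "in_store_bogo"],
    "spend":             [50_000, 200_000, 30_000, 180_000, 45_000],
    "incremental_sales": [140_000, 310_000, 95_000, 260_000, 120_000],
})

ranking = (campaigns
           .assign(roi=lambda d: d["incremental_sales"] / d["spend"])
           .groupby("channel")["roi"]
           .mean()
           .sort_values(ascending=False))
print(ranking)  # highest expected return per dollar of spend first
```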

Pitfalls to avoid

CPG companies find themselves in a perfect place to make the most of the AI boom. The technology is widely accepted as useful, and there are several proven methodologies shaped by other industries. From companies that are already reaping the benefits of AI, the key takeaways include –

  • AI as a means, not an end – CPGs can apply AI to various aspects of their business operations and get augmented results. However, this is only possible when AI application is treated as a means to help workers, not eliminate them. For instance, when applying AI in marketing, any substantial discoveries or predictions should be provided to the experts in the marketing team so that they can then make even more informed decisions.
  • Streamlined Approach – CPG companies should avoid incorporating AI into every aspect of the business. Launching ten initiatives at once will more than likely result in those projects being stuck in the development phase for the next ten years. Companies must narrow down on one or two aspects of their business in order to have a better chance of delivering mass-scale results.

CPG company heads must stop viewing AI investments as “research projects” and welcome them as a way of carrying out day-to-day business tasks. Accepting the use of data-driven models in departments where employee intuition has always led operations can be a challenging and contentious change. There is a lot that CPG companies can achieve by using AI applications to support the business, but choosing the right initiative may be the key to successful implementation.

Transformation of Model Factory in the age of AI

Sanat Pai Raikar, Associate Principal, Tredence

AI has become the pillar of growth for companies when it comes to maintaining relevance as well as an edge over the competition. What’s more, AI-based models have become the new revenue drivers for companies looking to capitalize on data as a competitive advantage. The rise in algorithmically driven successes can be attributed primarily to enhancements on the hardware side. Big data tools, and an infrastructure based on both on-premise and cloud services, have paved the way for this fully evolved AI/ML ecosystem.

According to a study, AI is the next digital frontier and organizations that leverage models have a 7.5% profit margin advantage over their peers. With AI models becoming the key pillar for building valuable IP and revenue, Tredence shows the way with a new approach to model management.

With more research being plowed into tweaking neural networks, businesses face a set of tricky questions: How profitable is it to go full ML? Is the available compute infrastructure sufficient to take the leap? Can a deployed model adjust to changing conditions and business requirements?

From training personnel to acquiring tools, business leaders are also grappling with critical questions related to model management — above all, model validity in the face of changing business realities. Models lose validity over time as market realities change, new contingencies emerge, and new variables come into the picture. Hence, all models need to be revamped and refreshed regularly to ensure they remain relevant. However, the refresh process is often manual and leaves a lot of room for improvement.

Tredence employs machine learning algorithms to develop analytics solutions for its customers. Our solutions range from providing prediction frameworks for online retailers in the US to cutting costs for manufacturers of thermal insulation materials. We are embracing a factory approach to building AI models.

Need For A Move to a Factory Approach

There are multiple reasons why model building needs to move to a factory approach. Setting up models for the first time is a highly ad hoc process that is over-dependent on the skill of the data scientist building the model. The process is also highly susceptible to human biases and is very labor intensive. Model refreshes, on the other hand, are reactive, tend to follow the process blindly, and remain labor intensive.

The term ML model refers to the model artefact that is created by the training process. The training data must contain the correct answer, which is known as a target or target attribute.
The learning algorithm finds patterns in the training data that map the input data attributes to the target (the answer to be predicted), and it outputs an ML model that captures these patterns. A model can have many dependencies, and to ensure that all of its components and features are available both offline and online for deployment, all of this information is stored in a central repository.
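A minimal sketch of that idea: train a model, then persist the artefact together with its metadata (features, target, validation metric) to a central location. The local folder below is only a stand-in for whatever model registry or feature store an organization actually uses.

```python
# Minimal sketch: persist a trained model artefact plus its metadata so that
# training and serving can share one "central repository". The local folder
# below stands in for a real model registry / feature store.
import json
from pathlib import Path

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

repo = Path("model_repository/demo_model/v1")   # hypothetical repository layout
repo.mkdir(parents=True, exist_ok=True)
joblib.dump(model, repo / "model.joblib")        # the model artefact
(repo / "metadata.json").write_text(json.dumps({
    "target": "diagnosis (0 = malignant, 1 = benign)",  # the target attribute
    "features": list(X.columns),    # inputs needed both offline and online
    "validation_auc": round(auc, 4),
}, indent=2))
```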

The new setup for a model factory approach should start with strong clarity about the business requirements and environment. When building the model for the first time, the bounds of the model should be clearly defined and the best model identified; if necessary, an ensemble of multiple models should be used. A good model can be identified based on multiple criteria, such as quality metrics, cumulative gains, heat maps, bootstrapping methods and other techniques.

The model refresh process should go through the following steps:

  • Define frequency of refresh, as well as exception conditions under which an out-of-turn refresh must be done
  • Define when the refresh will occur – will it be when current scenarios repeat, or when new scenarios emerge?
  • Automate the refresh process, with clear bounds defined. Data collection, splitting the dataset into training and validation samples, running the models, and validating and analyzing them for accuracy are all steps that can be automated (a minimal automation sketch follows this list).
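As referenced above, here is a minimal sketch of such an automated refresh step, assuming fresh labelled data arrives as a DataFrame and the incumbent model is only replaced when the challenger beats it on a hold-out sample; the names, paths and promotion rule are illustrative assumptions, not a prescribed pipeline.

```python
# Illustrative model-refresh step: retrain on fresh data, validate against the
# current production model, and promote only on improvement.
# File paths and the promotion rule are placeholders for a real pipeline.
import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def refresh_model(fresh_data: pd.DataFrame, target: str, prod_path: str) -> None:
    X = fresh_data.drop(columns=[target])
    y = fresh_data[target]
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    challenger = GradientBoostingClassifier().fit(X_train, y_train)
    challenger_auc = roc_auc_score(y_val, challenger.predict_proba(X_val)[:, 1])

    try:  # score the incumbent on the same validation sample
        incumbent = joblib.load(prod_path)
        incumbent_auc = roc_auc_score(y_val, incumbent.predict_proba(X_val)[:, 1])
    except FileNotFoundError:
        incumbent_auc = float("-inf")

    if challenger_auc > incumbent_auc:          # promote only on improvement
        joblib.dump(challenger, prod_path)
    print(f"challenger AUC={challenger_auc:.3f}, incumbent AUC={incumbent_auc:.3f}")


# Hypothetical usage:
# refresh_model(pd.read_parquet("fresh_orders.parquet"), target="churned",
#               prod_path="prod_model.joblib")
```

Scheduling such a function at the defined refresh frequency (or on the exception triggers above) is what turns ad hoc refreshes into a factory process.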

Importance of the AI-human Interface

The ultimate goal of any AI research is to derive insights about the business. Highly accurate AI models are usually harder for a human (especially a non-data scientist) to interpret, so the right model — one that balances accuracy against interpretability — should be deployed. Since the eventual value of a model lies in its usage by business teams to meet targets or achieve goals, human review and understanding of models is essential. The model factory is intended to save human time in refreshing models through automation; this time can in turn be used to analyze and derive the right insights from the model results.

Since humans are the ones who ultimately benefit, there is a rising need for augmented intelligence – AI that assists human decision-makers rather than replacing them. Tredence has developed augmented intelligence solutions like Word Craft, Sancus and DCC (Digital Content Categorizer). For instance, Sancus is a machine learning and deep learning-based data management solution that aims to deliver reliable data to businesses.

Future Direction

Traditional data storage and analytics tools can no longer provide the agility and flexibility required to deliver relevant business insights. An AI/ML-based model factory approach, augmented with human intelligence, can help organizations maintain competitiveness and relevance. Organizations seeking to transition to an AI/ML-based model factory setup can get an idea of how to scale by looking at Tredence’s approach.

Do I need to be a programmer for a career in Data Science?

Viswanath RT, Associate Principal

Priya, a budding data scientist, was upset when she was bombarded with programming-related questions in her recent job interview. “I spent the last two years working on various modelling techniques, but now I am being asked questions about Python? I would like to build my career in data science and not in application development,” she said, with genuine doubts about her choice. The interview was held by a leading unicorn in the startup space, and she had reasons to believe this practice was not an exception. After all, her friends had similar experiences in other setups.

This is a common dilemma faced by folks who are beginning their careers. What should young data scientists focus on — understanding the nuances of algorithms, or applying them faster using tools? Some veterans see this as an “analytics vs technology” question; this article begs to differ, as we will discover as we progress. How should you build a career in data science?

Analytics has evolved from a shy goose, a decade back, to an assertive elephant. The tools of the past are irrelevant now; some lost market share, their demise worthy of case studies in B-schools. However, if we are to predict the future of this field or build a career in it, its journey offers some significant lessons.

The Journey of Analytics

A decade back, analytics primarily was relegated to generating risk scorecards and designing campaigns. Analytical companies were built around these services.

Their teams would typically work on SAS, use statistical models, and the output would be some sort of score – risk, propensity, churn, etc. Its primary role was to support business functions. Banks used various models to understand customer risk, churn and the like, while retailers were early adopters of analytics-driven campaigns.

And then “Business Intelligence” happened. What we saw was a plethora of BI tools addressing various needs of the business. The focus was primarily on efficient visualization. Cognos, Business Objects, etc. were the rulers of the day.

But the real change to the nature of Analytics happened with the advent of Big Data. So, what changed with Big data? Was the data not collected at this scale, earlier? What is so “big” about big data? The answer lies more in the underlying hardware and software that allows us to make sense of big data. While data (structured and unstructured) existed for some time before this, the tools to comb through the big data weren’t ready.

Now, in its new role, analytics is no longer just about algorithmic complexity; it needs the ability to address scale. Businesses wanted to understand the marketable value of this newfound big data. This is where analytics started courting programming. One might have the best models, but they are of no use unless you can trim and extract clean data out of zillions of GBs of raw data.

This also coincided with the advent of SaaS (Software as a service) and PaaS (Platform as a service). This made computing power more and more affordable.

By now, there was an abundance of data coupled with economical, viable computing resources to process it. The natural questions were: What can be done with this huge volume of data? Can we perform real-time analytics? Can algorithmic learning be automated? Can we build models that imitate human logic? That is where Machine Learning and Artificial Intelligence started becoming more relevant.

What then is machine learning? Well, to each his own. In its more restrictive definition, it limits itself to situations where there is some level of feedback-based learning. But again, the consensus here is to include most forms of analytical techniques into it.

While traditional analytics needs a basic level of expertise in statistics, you can now perform most advanced NLP, computer vision and similar tasks without any knowledge of their internals. This is made possible by the ML APIs of Amazon and Google. For example, a 10th grader can run facial recognition on a few images with little or no knowledge of analytics. Some veterans question whether this is real analytics. Whether you agree with them or not, it is here to stay.
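As an illustration of how little code such a managed API call takes, here is a sketch using AWS Rekognition through boto3; the bucket and image names are placeholders, and valid AWS credentials are assumed to be configured in the environment.

```python
# Face detection with a managed ML API (AWS Rekognition via boto3).
# Bucket and object names are placeholders; AWS credentials are assumed
# to be configured in the environment.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-demo-bucket", "Name": "group_photo.jpg"}},
    Attributes=["ALL"],   # return age range, emotions, etc., not just bounding boxes
)

for face in response["FaceDetails"]:
    box = face["BoundingBox"]
    print(f"Face at ({box['Left']:.2f}, {box['Top']:.2f}), "
          f"confidence {face['Confidence']:.1f}%")
```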

The Need for Programming

Imagine a scenario where your statistical model output needs to be integrated with ERP systems, to enable the line manager to consume the output or, even better, to interact with it. Or a scenario where the inputs to your optimization model change in real time and the model must be rerun. As we see more and more business scenarios, it is becoming increasingly evident that embedded analytical solutions are the way forward, and the way analytical solutions interact with the larger ecosystem is getting the spotlight. This is where programming comes into the picture.
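A minimal sketch of what "embedding" can mean in practice: wrapping a trained model in a small REST endpoint that an ERP or planning system could call. Flask, the feature layout and the model path are illustrative choices, not a prescribed stack.

```python
# Minimal illustration of an embedded analytical service: a trained model
# exposed over REST so that an ERP / planning system can consume its output.
# Flask, the payload format and the model path are illustrative choices.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model_repository/demo_model/v1/model.joblib")  # placeholder path


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                 # e.g. {"features": [[...], [...]]}
    scores = model.predict_proba(payload["features"])[:, 1]
    return jsonify({"scores": scores.tolist()})


if __name__ == "__main__":
    app.run(port=5000)
```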

For an analytics solution to be scalable, technology is crucial. While Priya might be jolted by this revelation, she is one of the lucky ones to realize it early in her career. Now she has the tools to redraw her career path.

Identifying opportunities in outbound transportation through load consolidation

Shubhankit Verma, Manager – Supply Chain

Have you ever wondered how the complexity in the supply chain ecosystem is influencing your freight spend? What are the factors affecting the overall freight management curve in the supply chain? How can an enterprise meet the 7Rs of Logistics while handling a huge amount of customer data and external factors affecting freight management? These are some of the common challenges supply chain companies face, especially during the outbound transportation process. Some of the common impediments in transportation management are as follows:

  • Managing free or wasted space during the shipping process
  • Saving fuel costs, including expenses incurred during environmental checks/regulations, fuel surcharges, etc.
  • Managing manpower to deliver the product at the right time

Utilizing a thorough and transparent shipment process can help eliminate challenges like empty space in trucks. Supply chain management is a complex process with several components to be addressed at different levels. Logistics load consolidation is a popular trend in the T&L segment, and it offers the supply chain industry a great advantage: it saves cost.

    Identifying a solution that can evaluate and consolidate customer orders while saving costs at the same time is a current shortcoming of the TMS (Transportation Management System) in the supply chain. So, how can an enterprise tackle this challenge using technology?

    Load Consolidation: Effect of Digital Disruption in the Freight and Logistics Industry.

    Getting goods to market on time with low (or no) defects is one of the key challenges experienced by a supply chain enterprise. To meet customer demands for on-time delivery, along with a reduction in freight costs, companies have been resorting to strategic load consolidation solutions in recent times.

    What is Load Consolidation? Can it really help bring down the huge shipping costs?

    Some of the key points to be considered in load consolidation include:

    • More volume per truck means fewer dollars spent per pound
    • Fewer trucks mean more organized operations
    • Fewer truck-availability problems

    How can you make sure that there is a consolidation opportunity? What are the early indicators?

    • One of the commonly used terms in freight consolidation is the load profile. In its crudest form, a load profile captures the forecast load requirements based on existing or released orders. Can the analysis of a shipment load profile help identify consolidation opportunities? The answer is yes.

    If the load profile is skewed towards lower-tier LTLs (Less than Truckload) rather than higher-tier LTLs or FTLs (Full Truckload), the possibility of merging loads and moving them from lower tiers to higher ones is quite high.

    *Load tier is assessed using the total weight of a load being shipped. Typically, 1-200 lbs is a Tier 1 load, whereas 10K-15K lbs is considered a Tier 6 load. Anything beyond 15K lbs is usually economical to send as a Full Truck Load. (A small tiering sketch with pandas appears after this list.)

    • If you have multiple Distribution Centers (DCs) with item mirroring of less than 70-80%, there is an opportunity to create multi-pick consolidation scenarios

    *Item Mirroring is a concept used in inventory management practices. In this process, an SKU item can be mirrored to be stocked in specific locations based on customer distribution. If there are more DCs, the percentage of mirrored SKU products across all locations must come down.

    • If you have customers in clusters

    • If your customers’ orders follow a known pattern
    • If your average truck utilization is less than 60-70% (LCL or LTL)
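As referenced above, here is a small sketch of tiering a shipment load profile with pandas. Only the Tier 1 (1-200 lbs) and Tier 6 (10K-15K lbs) boundaries come from the definition above; the intermediate cut-offs and the sample shipments are invented for illustration.

```python
# Tier a shipment load profile by total weight. Only the Tier 1 (1-200 lbs)
# and Tier 6 (10K-15K lbs) boundaries come from the text above; the
# intermediate cut-offs and sample data are invented placeholders.
import pandas as pd

shipments = pd.DataFrame({
    "load_id": ["A1", "A2", "A3", "A4"],
    "weight_lbs": [150, 2_400, 12_500, 18_000],
})

bins = [0, 200, 1_000, 3_000, 6_000, 10_000, 15_000, float("inf")]
labels = ["Tier 1", "Tier 2", "Tier 3", "Tier 4", "Tier 5", "Tier 6", "FTL"]
shipments["tier"] = pd.cut(shipments["weight_lbs"], bins=bins, labels=labels)

# A profile skewed towards the lower tiers signals consolidation opportunity.
print(shipments["tier"].value_counts().sort_index())
```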

    What are the ways to consolidate load?

    Within the logistics landscape, freight consolidation (a.k.a., Load Consolidation), freight procurement and its distribution play a pivotal role in saving transport costs for the company. Based on the availability of DCs and SKUs and the pattern of customer distribution (drop locations), the load consolidation practice can be categorized into:

    Wait and Consolidate: This method can be applied in cases with a known customer order pattern. The retailer can wait until the complete order is received (not in parts but in its entirety) and then ship it as an FTL.

    Multi Pick Single Drop (or MPSD): This technique is applied when there are no suitable DCs in the surrounding area or there is a lack of “mirror-worthy” items. The retailer can have the truck pick up the items/goods from multiple DCs and drop them at the customer’s doorstep.

    Single Pick Multi Drop (or SPMD): When the number of drop locations increases (more customers from the same neighborhood), a retailer can use this method and ship an FTL from one warehouse to be dropped at multiple locations in one go.

    Multi Pick Multi Drop (or MPMD): This method extends MPSD (Multi Pick Single Drop): SKUs are collected from multiple DCs and dropped at multiple customer locations (in the same neighborhood).

    How to bring actionability: Execution plan in freight management?

    Load consolidation in the supply chain can help reduce operating costs and increase service levels at a significant rate when compared to the conventional approach. Here’s how load consolidation can dynamically provide effective, actionable outcomes in freight management (a minimal lane-ranking sketch follows the list):

    • Identification of the top lanes for Single Pick and Single Drop scenarios.
    • Identification of the top lanes for Multi-Pick scenarios.
    • Identification of the top lanes for Multi-Drop scenarios.
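As referenced above, a minimal lane-ranking sketch with pandas: group shipments by origin-destination lane and flag lanes where many partially filled loads move together. The data and the 60% utilization threshold are illustrative assumptions.

```python
# Illustrative lane ranking: find origin-destination lanes with many
# under-utilized loads, i.e. the best candidates for consolidation.
# The data and the 60% utilization threshold are invented.
import pandas as pd

shipments = pd.DataFrame({
    "origin_dc":   ["DC1", "DC1", "DC1", "DC2", "DC2"],
    "destination": ["Austin", "Austin", "Austin", "Dallas", "Dallas"],
    "weight_lbs":  [1_800, 2_200, 2_500, 14_000, 900],
    "truck_capacity_lbs": [40_000] * 5,
})
shipments["utilization"] = shipments["weight_lbs"] / shipments["truck_capacity_lbs"]

lanes = (shipments[shipments["utilization"] < 0.60]       # under-utilized loads only
         .groupby(["origin_dc", "destination"])
         .agg(loads=("weight_lbs", "size"), total_weight=("weight_lbs", "sum"))
         .sort_values("loads", ascending=False))
print(lanes)   # top lanes for single-pick/single-drop consolidation
```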

    Shipment or load consolidation not only helps reduce logistics costs but is also instrumental in managing greenhouse gas emission (GHG) risks.

    All in all, the contribution of data analytics in supply chain management is worth a mention, for it has significantly improved and optimized logistics operations with the help of techniques like load consolidation.

    So, what is your take on this simple yet effective mechanism to reduce costs and improve efficiency of operations?

    Please share your inputs in the comments section…

     

The Supply Chain AI Hype and the Importance of Supply Chain Control Tower in its Digitized Form

    Bhaskar Seetharam, Associate Principal – Supply Chain

    The hype around Artificial Intelligence is far from fizzling out anytime soon. Digitalization and big data have completely penetrated the supply chain industry and are ubiquitous in nature. This article discusses one of the more interesting trends in the current supply chain analytics space – The Control Tower.

    The concept of Air Control Towers and the Evolution of Digital Control Towers in Supply Chain

    One may wonder whether supply chain control towers have any correlation with air traffic control. To be honest, yes, they do!

    Air traffic control (ATC) is a service provided by ground-based controllers, who direct aircraft on the ground and through controlled airspace and can provide advisory services to aircraft in non-controlled airspace. The primary purpose of ATC worldwide is to prevent collisions, organize and expedite the flow of air traffic, and provide information and other support for pilots (Wikipedia). In short, the tower helps improve flow, reduces emergency-like situations through tactical interventions and provides inputs for the right decision-making. In fact, ATC systems can now be enabled for an ‘auto-pilot’ mode wherein complex decisions are taken without human intervention; humans intervene only in cases where there is no reliable data to make a trade-off.

    Digital control towers aim to maintain a bird’s-eye view of the events occurring within the supply chain ecosystem (controlled and uncontrolled space), with a modus operandi very similar to that of a generic air traffic controller. With the help of this consolidated view generated by digital control towers in the supply chain, one can gain powerful insights into the current happenings within the organization. These insights help in improving flow across the organization, reducing urgencies and providing insights and tactical support to supply chain managers to make effective decisions. In fact, in the longer run, very similar to the ATCs of today, supply chain control towers should have the capability to make complex decisions when there is adequate reliable data.

    Significance of Digital Control Towers in Supply Chain

    Corporations today want to leverage the useful applications of the supply chain control tower. Organizations have copious amounts of data across their supply chain and related functions, and over the past few years they have managed to build business intelligence and analytics solutions to drive decision-making, but only at a node level. The need of the hour is to extract valuable insights from the right sets of data lying across various nodes of the organization, while also utilizing market intelligence, to deliver real-time visibility and drive decisions that are optimal across the organization. For example, with an expected slow-down in sales of specific SKUs, a client may wonder whether their manufacturing plant needs to continue producing to plan, or whether it makes sense to course-correct and lose capacity.

    While an ATC is designed to minimize errors by incorporating huge factors of safety and commonly understood rules of engagement between various players (airlines, pilots, other ATCs), supply chain digital control towers have the luxury to experiment under statistical variability. For example, try different stock norms and check the impact on service levels, or see whether a reduced order-to-delivery promise induces better productivity and hence improved customer service levels, and so on. This ability of a supply chain to experiment, to try and fail or succeed quickly at nominal cost, can help build a virtuous loop of innovation within a supply chain and drive a cultural change.

    Most organizations today recognize the impact a control tower can have. For a global organization, it is probably one of those platforms that will steer the supply chains of the future. Many organizations have tried implementing a control tower, but there have been very few examples of success. More often than not, organizations fall short of implementing a “gold-standard” control tower capable of real-time visibility, predictive alerting, identifying bottlenecks in the supply chain and providing insights that can drive decisions; instead they end up implementing a large set of dashboards that showcase the different KPIs important to the various nodes in a supply chain.

    This possibly is because of challenges that are faced when implementing an initiative as large as a control tower.

    How does a Digital Supply Chain Tower work?

    The SCCT should help an organization in making 3 key decisions – a) Ensure smooth flow-paths across the supply chain, b) Identify or predict bottlenecks / constraints to flow, and c) Derive efficiency/utilization improvement opportunities in the current network.

    Hence, some of the key functionalities that are required would be:

    • End-to-end data connectivity: The ability to go beyond one-dimensional reports and tools and to work with data from the different nodes of a supply chain is important.
    • Visibility: The SCCT should provide visibility into key supply chain KPIs (simple and complex). It should showcase the right metrics, while also being able to project the impact of a decision on a metric in real time.
    • Analytics: Supply chain control towers are equipped with analytical tools and applications. With their help, supply chain managers can easily run what-ifs and take calculated decisions. They can harness the power of predictive analysis to detect ‘tripping points’, trigger alerts, and conduct root-cause analysis of the data to arrive at solutions and address challenges (a minimal alerting sketch follows this list).
    • Execution: The real benefit of the SCCT lies in the way the control tower communicates with the executive and operational teams across the supply chain and allied functions. Hence this is an important aspect of SCCT adoption within an organization.
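As referenced in the list above, here is a minimal sketch of predictive-style alerting on a supply chain KPI: flag days where the metric drifts beyond a rolling band. The KPI values, the 14-day window and the 2-sigma band are illustrative assumptions.

```python
# Illustrative KPI alerting for a control tower: flag days where on-time
# delivery drifts outside a rolling 2-sigma band. Values are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
kpi = pd.Series(rng.normal(0.95, 0.01, 60), name="on_time_delivery")
kpi.iloc[45:50] -= 0.08          # inject a disruption to be detected

rolling_mean = kpi.rolling(window=14).mean()
rolling_std = kpi.rolling(window=14).std()
alerts = kpi[(kpi < rolling_mean - 2 * rolling_std) |
             (kpi > rolling_mean + 2 * rolling_std)]

print(alerts)   # days on which the control tower would raise an alert
```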

    Key Challenges to Implementing a Supply Chain Control Tower

    The supply chain control tower, unlike a typical analytics project, entails involvement from multiple functions and geographies across the supply chain (like the involvement of multiple VPs/SVPs in large enterprises).

    Implementing SCCT would mean working with a team having – a) Different priorities, b) Very different data maturity and data quality, and c) Different products and software (some archaic and some new-age).

    Some of the key challenges that appear during the construction and execution of SCCTs include –

    • When the SCCT implementation is picked up as a priority by a single function within the supply chain, without getting early buy-in from the other key functions, there is a high chance the implementation will hit multiple roadblocks.
    • Many times, teams tend to implement the most complex piece, or the piece of the SCCT that seems most interesting. This may lead to no tangible results for an extended period, and in turn to a lack of enthusiasm from fringe teams.
    • Data maturity: Different functions may have different levels of data maturity (availability, quality, etc.). Failing to assess and map this aspect will tend to escalate timelines and cost.
    • Sometimes the implementing partner makes the mistake of selling the SCCT not as a strategic tool that can transform business functions but as just another piece of software that will improve business efficiencies. This leads to wasted effort, as the implementation gets driven in a completely wrong direction.
    • A number of proven analytics tools and products usually already exist with the client. Integrating these existing tools/products into the SCCT roadmap may cause issues during implementation but will help adoption.

    A typical consequence of not overcoming these challenges is that companies go down the path of SCCT implementation (visibility, predictive analytics, decision tools, etc.) but end up implementing an end-to-end KPI dashboard. Though the dashboard may still bring benefits, it causes disillusionment within the client project team about SCCT capabilities.

    Some of the ways to mitigate these challenges and move towards a successful implementation include –

    • Treat SCCT implementation as a strategic initiative and not as an IT implementation. Hence, it is critical to have someone high in the business team (CSO level) bless the initiative.
    • When prioritizing sprints, give equal weights to simple but quick wins – this motivates the client’s project team.
    • Always assess the current tools and products in client environment, i.e., prioritize integration over innovation.
    • Continuous engagement with all functions (even if there is nothing happening in a specific function) is important and should be made into a practice.

    An Example of our Supply Chain Control Tower Implementation –

    The client is a global retail major that was using siloed systems to track supply chain KPIs and performance. As part of Phase 1 of the implementation, the client wanted to ensure high visibility of status and performance metrics at each node within their supply chain network. As part of Phase 2, the client wants to incorporate predictive tools, root-causing and scenario planning into the supply chain control tower.

    Tredence helped build Phase 1 of the control tower, improving visibility and efficiency at each node using interactive dashboards with holistic views, updated with live data, for better planning and informed decision-making.

Defining MTO and MTS Production Strategy and its Implementation

    Bhaskar Seetharam, Associate Principal – Supply Chain

    One of the key metrics that almost all organizations work towards is On-Time Performance, a measure of the reliability of a process or an organization. To achieve high on-time performance, the first and probably most important step is to be able to give your customers the right delivery timelines, or delivery lead time. The right delivery lead time for a product is defined either by the customers (in cases where competitors/substitutes are easily available) or by the company based on internal constraints.

    High-tech makers of industrial machinery are typically ‘Make-To-Order’ operations, wherein machines are manufactured or assembled against a firm order and the customer who places the order is willing to wait for delivery. At the other extreme, commodity goods suppliers (such as cement or salt manufacturers) are generally ‘Make-To-Stock’ and maintain inventories at multiple echelons in their network to ensure they are ready to serve demand from their consumers.

    Unfortunately, most industrial and manufacturing organizations carry a mix of MTO and MTS products. They constantly grapple with the problem of deciding which items to serve from stock and which products to offer but produce only against orders. Should all fast movers be MTS and slow movers be MTO? Does the MTS/MTO choice depend on competitors/substitutes? Should an item once defined as MTS remain MTS forever?

    In the following section, we discuss the conceptual definition of MTO and MTS.

    Defining Make-To-Stock (MTS) and Make-To-Order (MTO)

    In its simplest form, there are two key factors that define whether an item or an order is MTS or MTO – Supply Lead Time (SLT) or the time it takes to produce/supply the item to a customer AND Customer expected Lead Time (CLT) or the time that a customer/consumer expects the item to be made available. An important point to note here is that both SLT and CLT are not specific to an ITEM but to an ORDER.

    Simply put? –

    For any order where the supply lead time (SLT) is less than or equal to the customer expectation lead time (CLT), the order can be treated as MTO; otherwise the order is MTS:

    • If SLT <= CLT then MTO
    • If SLT > CLT then MTS

    Thus, the first level of MTO/MTS definition is at an Order level.
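A minimal sketch of this rule applied at order level with pandas; the order data is invented and lead times are assumed to be in days.

```python
# Order-level MTO/MTS classification from the rule above:
# SLT <= CLT -> the order can be made to order (MTO); otherwise serve from stock (MTS).
# The orders below are invented; lead times are in days.
import pandas as pd

orders = pd.DataFrame({
    "order_id":           ["O-101", "O-102", "O-103"],
    "supply_lead_time":   [12, 3, 20],    # SLT: days to produce/supply the order
    "customer_lead_time": [15, 1, 20],    # CLT: days the customer is willing to wait
})

orders["strategy"] = orders.apply(
    lambda row: "MTO" if row["supply_lead_time"] <= row["customer_lead_time"] else "MTS",
    axis=1,
)
print(orders)
```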

    Issues with this Definition.

    An important part of the above discussion is an organization’s ability to judge the CLT. Though CLT is fundamentally driven by the way a product is consumed (e.g. grocery items are typically required immediately and consumed daily), it is also impacted by a) availability, b) technology, c) pricing, d) competition/substitutes, and e) other factors. What this also means (as stated earlier) is that for the same item, one set of consumers may be willing to wait (CLT is high => MTO), while another set may want it immediately (MTS). For example, a specific brand of tea powder may be MTS in a certain geographic area, but may be treated as MTO for orders coming in from other geographies.

    Some quick ways to assess whether a product is an MTS or MTO include

    • Are there competitors or like-for-like substitutes available for the product?
    • Does availability significantly impact the sales of the item?
    • What is the profile of the end consumers/customers? For example, the same product could be supplied to a retail chain (MTS) and to a commercial project (MTO).
    • Is our end consumer willing to accept a quoted delivery date, or will they find an alternate source/product?

    Though the MTO/MTS definition of an order or an SKU should typically be derived customer-backwards, companies tend to follow simplified “thumb rules”. For example, one of the companies we worked with, a manufacturer of bathroom accessories, had a simple rule that all fast-moving items would be MTS and the rest of the SKUs would be treated as MTO.

    This view of the industry (quite prevalent) does work to a large extent. It is a good enough approach that helps companies get it right on most occasions. But a few problems that this simple approach poses are –

    • A lot of manufacturing companies today serve a range of customers (retail, projects, distributors, etc.); as discussed earlier, an SKU can be MTS for a retail customer and MTO for a “projects” customer. In a manufacturing organization with constrained capacity, serving an MTO customer from stock (as if MTS) is a crime, as it wastes limited capacity/resources. A typical mapping looks like this:

                          Retail Customers    Distributors    Project Customers
     Fast Moving SKUs     MTS                 MTS             MTO/MTS
     Slow Moving SKUs     MTS/MTO             MTO             MTO

     

    • Abnormally large orders are often served from inventory, depending on the customer type and ordering mechanism (since the item has been classified as MTS). This leaves the item unavailable for a large section of smaller customers, leading to urgent orders in manufacturing.
    • The MTS/MTO definitions typically end up being static; items classified as MTO tend to get de-prioritized (as there is no inventory) and the range is not actively upsold. Over time, this behavior leads to a shrinking of the product range.

    Suggested Solution to MTO/MTS classification:

    With sufficient data available today across most organizations, we have built data-driven tools to classify SKUs and more importantly ORDERS into MTO and MTS

    Tredence MTO-MTS solution:

    Some of the key data points that we look at for MTO/MTS decision making are:

    • Expected Lead time by customer (CLT v SLT, Master data)
    • Sales Rate (Value, volume)
    • Sales frequency
    • Order volume distribution
    • Sales channel mapping (certain SKUs are offered in certain channels only)
    • Risk of Obsolescence (fashion?), damage etc.
    • Inventory Risk -Days of Cover (MOQ vs sales rate)
    • Cost of carrying inventory vs Margin (high value-low margin item?)
    • Order size (Elephant orders)

    This data-driven model can assist in making the MTO/MTS decision dynamically and provide planners with answers to –

    • Inventory planner – “Do I keep SKU A in stock or Not?”
    • Fulfillment planner – “Do I treat this new order in the system as an MTO or an MTS from units available in the inventory?”
    • Production planner –
      • “How much production line capacity should I allot to MTS and MTO (for MTO date planning)?
      • How do I prioritize between MTO and MTS orders?”

    How does it Add Value to the Client?

    • Reduced inventory cost and capital cost (damages, inventory obsolescence, etc.). It also frees up warehouse space, optimizes working capital and thus reduces the overall cost to the company.
    • Better capacity utilization and reduced production losses – with lesser instances of procurement expediting expenses.
    • Improves Sales through improved stock/product availability.
    • Improved On-Time Delivery performance (right use of capacity and right expectation setting).

    All in all, Customer Lead Time analysis and management can influence MTO-MTS productivity. With the right approach and assistance, not only can the CLT be met, but working capital and inventory costs can also be reduced.

     

Product Management in the data science world

    Sagar Balan, Principal, Customer Success

    The topic of ‘Product Management’ has received a lot of attention in recent years, with several rounds of discussion attempting to frame it from the client’s standpoint.

    As I heard more of these conversations, there was an uncomfortable ambiguity stemming from disbelief – is this another fad or is there meaning to it? Well, the initial rumblings were from the cool kids in the bay. But, why did grounded Midwest and shoot-from-the-hip south latch on? Must be something deeper, right?!

    Product management has been around forever in the software, e-commerce world. But, today, mainstream IT and AI teams in fortune 500 companies are thinking of a product paradigm shift. Leading consulting firms are also developing products or beefing up their technology as an eventuality.

    But, the question that begs attention here is – why products? What happened to software as a service, platform as a service, ML as a service? Do we need another paradigm shift? Or as the saying goes – Old wine in a new bottle?

    IT teams today are being led by progressive Chief Digital Officers and Chief Data Officers. Conventionally, CIOs have delivered their value through app dev teams, BI teams, infrastructure teams et al. While this may have become table stakes, it has been around for a while already. The question is – ‘How to deliver incremental value to the business?’

    So, what has changed?

    Demand:

    IT is today called upon to be a true business partner. And, given the rate at which business is facing change, the time to deliver value is compressed.

    Glocal innovation:

    For a Fortune 500 firm operating globally, innovation is striking at its core from multiple directions. While the USA is still the biggest revenue and EBITDA generation engine, problem and solution innovation is happening faster in other markets than in the USA. For starters, they have less legacy to deal with. The local markets are facing competition from nimbler players. VC money is flowing into firms in China, Israel, Korea and India, which are encountering newer problems in the e-commerce and voice commerce sectors. Other traditional revenue-generating markets, individually facing slower growth, find it difficult to make a business case to invest in solutions led by such innovations.

    Problem repeatability:

    This is going to sound rhetorical, but I must state it because it is relevant. Business problems in today’s enterprise are constantly changing. Few of them get recreated, and hence they are not available in large volumes. A few others are becoming common across markets, and are thus settling into tightly defined problems that can be addressed globally. Repeatable.

    A good indicator of this is AWS’s recent product launches – out-of-the-box image, text, voice, reinforcement learning and forecasting services. Common problems that are finding repeatable solutions.

    The AI candy shop:

    Today, nobody wants to use process automation tools that do not have intelligence embedded in them. Passé, inefficient. Wall Street, investors and boards are lapping up the buzzwords – cognitive, AI, embedded ML.

    Cloud enabling global scalability:

    Cloud platforms such as Azure, AWS have ensured that once you have these AI capabilities developed, they can be deployed globally. The global-local adaptation is a key design criterion in this context.

    Glocal solution adaptation…er,… maybe Glocal problem adaptation:

    Each market has its secret sauce in terms of the market structure, drivers and customer nuances. Thus, before adapting a solution from one market to the other, it is essential to adapt the problem as well. For example, it is an interesting pursuit to adapt the problem structure from the modern trade Australia market to half way across the world in Brazil.

    And, then adapt the solution.

    So, who’s game is it anyway?

    Given the above guard rails, it is quite evident that the business case cannot be built on a country-specific P&L or ROI measure alone; it must be a global mandate. IT is one of the few functions that is ideally poised to ride this wave. That they own the data systems is coincidental. Or, well.. was that the whole plan! Go, Trojan..

    Finally, after rambling about half the things in the world – we come to the initial topic of this article. Products. Why?

    A product has a life – it evolves constantly. The focus is on continually making the best product for its end user, ever possible. It has a roadmap. In a world of multiple users, it needs a strong owner who plans and decides well. It has a clear value proposition in each update/release. It can be developed in a sprint like manner. It can be defined with a bounded scope and sold internally in enterprises, with greater ease. And, be defined, abstracted, customized for a global roll out.

    Looks like a duck, walks like a duck, sounds like a… must be a duck. Yes, I guess it does look like a product.

    But, how do we help organize people and teams to get the products rolled out?

    While the roles below are common to any product-oriented firm, the thought process is different from conventional IT projects, where sharing of resources across projects is the biggest drawback. The smartest of each of the below folks will perhaps still fail without an organizing framework. The roles need to work in a closely integrated manner, dedicated to making a single product successful.

    Product Designer:

    A product designer is someone who can completely immerse themselves in the shoes of the end user, without worrying about the AI or tech issues that may occur. Just solve the end user’s real problem and keep tracking the end user’s behaviour as product usage evolves. In product management, there is a contradictory school of thought which mandates that the designer must appreciate “how” a product works. This, however, might dilute the designer’s objective of empathizing with the end user.

    Product owner:

    A functional expert of impact analysis who can connect the dots and identify the nuances of each problem. A great problem solver, with functional expertise, has the knack to see through the commonalities, and the uncommon aspects too. Prioritization between the must-haves, nice-to-haves and must-not-haves is a key skill required in the role.

    Product BAs

    Products are quite massive in scope today. Each product is usually broken down into sub-products, which are owned by individual product BAs.

    The AI solution developer(s)

    Usually, it is very difficult to get a product owner who really gets AI solution development. By and large, individual intelligence is anyways overrated. It is important to have a dedicated AI solutioning team which can translate the problem into a modular AI solution.

    The AI deployment team

    It is not enough to develop a modular AI solution. To be able to deploy it in globally scalable platforms requires seasoned IT big data engineering & testing capabilities. The plumbing and wiring required to take the AI concept to enterprise last mile reality is no mean task. It is a specialized function. Truly speaking, they give the product its real-life form.

    Scrum & Program Managers

    Last but not the least, you need the scrum team and program managers. Everyone benefits from their discipline and order amidst the chaos.

    So, what kind of product management tools would you require to deal with the existing concerns within your organization?

    All said and done. Is it enough to stand up a product team and deliver the product? More to come in the next article – adoption ..

Bringing the promise of ML to your MDM : Part II


    ‘Augmented data management’ is a key trend where AI/ML is transforming how enterprises manage their data.

    In the last article, we looked at some of the key pain points that exist as IT and business leaders constantly grapple with the increasing influx of data sources, without systems to keep up.

    Let us look at what makes Sancus address these pain points.

    So what is Sancus?

    Sancus is a suite of AI/ML based data management tools that aims to deliver reliable data to your business.
    The image below illustrates the breadth of issues that typically exist, and the specific entities/use cases the solution addresses.

    Let’s look at why each of these use cases is crucial to tackle from a foundational perspective.

    • Data validation – Your business units constantly refer to master data as a ‘source of truth’; it could range from critical data such as customer shipping information, a lead’s contact email or employee phone number. Maintaining, updating and constantly checking data validity to ensure business sees the right information is a key determinant of the quality of downstream decisions made.
• Data cleansing – Enterprises rarely have their master data in one source – standardized, cleansed and ready to go. The reality is that each data entity is sourced from traditional platforms (CRMs, ERPs), flat files, external sources, etc., and this often leads to redundant and duplicate information. Sancus leverages powerful machine learning models to identify similar entities, group them and assign a representative ‘golden record’ that helps the business identify and tie all relevant information to that unique record (a minimal sketch of this idea follows the list below).
    • Data enrichment – Every firm is well on its way to using internal data for decision making; however, the wealth of information present outside your firewall could help provide key insights on multiple fronts. This is where firms are keen to compete and gain a competitive advantage. As an example –
      • What if you could tie each of your customers to their parent firms, and actually identify white space opportunities to grow your business?
      • What if you could validate and enrich your product attributes, while also analyzing relative assortment and competitive pricing trends on e-commerce sites?
    • Hierarchy management – Hierarchies are a tough nut to crack, as they often combine the problems of the above use cases, and add complexities of their own. However, a robust hierarchy mapping of your customers, contacts, products and materials can be invaluable in gaining a 360 view of your business and providing opportunities to grow revenue while controlling cost.
      • An interesting application of hierarchy management is product category standardization (in this case, to the GS1 standard), which we have implemented for a few large retailers in the EU. This migration helped our customers streamline product lines, optimize their supply chain and rationalize their supplier portfolio.
• Data anomaly analysis – “What is the state of your data quality?”, “What are your data quality challenges?” These questions could prompt either blank silence or lengthy answers without a clear direction. The reality is that data quality metrics in any firm are complex, due to the multiple issues we have established. However, solving any of the use cases above without providing business users and IT teams with custom reporting and insights into their data health solves only part of the problem. Sancus leverages ML-driven anomaly detection tools to test variations in data and also allows custom business rules to be defined, so that clear data quality standards can be tested in order to measure and improve data quality.
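
    To make the ‘golden record’ idea from the data cleansing bullet above a little more concrete, here is a minimal, illustrative Python sketch. It is not Sancus code: the similarity measure, the matching threshold and the record fields are assumptions made purely for illustration, and a production system would use trained ML matchers rather than simple string similarity.

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Fuzzy string similarity in [0, 1] using Python's standard library."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def cluster_records(records, threshold=0.85):
        """Greedily group records whose names look alike (a stand-in for the
        ML-based entity matching described above)."""
        clusters = []
        for rec in records:
            placed = False
            for cluster in clusters:
                if similarity(rec["name"], cluster[0]["name"]) >= threshold:
                    cluster.append(rec)
                    placed = True
                    break
            if not placed:
                clusters.append([rec])
        return clusters

    def golden_record(cluster):
        """Pick the most complete record in a cluster as the representative."""
        return max(cluster, key=lambda r: sum(1 for v in r.values() if v))

    if __name__ == "__main__":
        customers = [
            {"name": "Acme Corp.", "city": "Austin", "phone": "512-555-0100"},
            {"name": "ACME Corporation", "city": "Austin", "phone": ""},
            {"name": "Globex Inc", "city": "Chicago", "phone": "312-555-0188"},
        ]
        for group in cluster_records(customers):
            print(golden_record(group), "<- represents", len(group), "raw record(s)")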

    How does Sancus work for you?

    • Data monitoring – Here is where we close the loop on the feature set. Sancus is built to integrate into your environment and run projects on an ongoing basis. As user feedback is provided, model accuracies increase, thereby improving automation and data quality KPIs. Sancus is configured based on use case(s), and comes with a managed services team that works to scope out client specific requests and enhancements that are to be built as part of the project.
      • The advantage with Sancus is its modular plug-and-play model, where different use cases use single or multiple components of the solution.
      • For example, a use case to cleanse and master customer data from Salesforce would have a workflow quite different from a use case to validate customer addresses and build customer hierarchies.
      • In other words, the solution is priced and deployed as per customer needs and integrated with the systems/processes they use currently.

    Sancus connects to a range of input systems, ingests data through a “data discovery” layer, where data is unified and standardized, and then processes data based on the configuration defined.

    The solution is cloud compatible as well as on-prem friendly and presents multiple options for integration.

    Sancus has seen a number of successful implementations with varying scale, right from a simple customer validation and de-duplication exercise all the way to replacing an enterprise MDM platform for address validation.

    In the last part of this series, we will explore some of the different flavors of the implementations done through Sancus.

    Learn more about Sancus, and reach out to our team for a free demo – www.zhazhai36.com/sancus/

    The post Bringing the promise of ML to your MDM : Part II appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/bringing-the-promise-of-ml-to-your-mdm-part-ii/feed/ 0
    The New Age of Customer Satisfaction http://www.zhazhai36.com/blog/the-new-age-of-customer-satisfaction/ Mon, 17 Jun 2019 10:45:41 +0000 http://www.zhazhai36.com/?p=6107 Let us start with an oft repeated question,” What do you know about your customer’s preferences”?
    The answer could be any of the standard responses which talk about their tastes in your merchandise based on past transactional records...

    The post The New Age of Customer Satisfaction appeared first on Tredence.

    ]]>
Crishna Carthic
    Senior Manager

Let us start with an oft-repeated question: “What do you know about your customer’s preferences?”

The answer could be any of the standard responses about their tastes in your merchandise based on past transactional records. It could also be one of the slightly more personalised answers about the customer’s likes and dislikes, based on whatever they have filled in their surveys and feedback forms. Does this tell you all you need to know about your customers? Does it help you make that customer’s experience something he or she will remember – something that gets ingrained into the sub-conscious decision-making component of their mind? That is the holy grail most CX organisations are after.

    Where does data come into the picture?

With 91 properties around the world, in a wide variety of locations, the Ritz-Carlton has a particularly strong need to ensure its best practices are spread companywide. If, for example, an employee from their Stockholm hotel comes up with a more effective way to manage front desk staffing for the busiest check-in times, it only makes sense to consider that approach when the same challenge comes up at a hotel in Tokyo. This is where the hotel group’s innovation database comes in. The Ritz-Carlton’s employees must use this system to share tried and tested ideas that improve customer experience. Properties can submit ideas and implement suggestions from other locations facing similar challenges. The database currently includes over 1,000 innovative practices, each of them tested on a property before contributing to the system. The Ritz-Carlton is widely considered a global leader in CX practices, and companies like Apple have designed their CX philosophy after studying how the Ritz-Carlton operates.

What does this tell you? Use your data wisely!

The next question that may pop up is, “but there is so much data – it is like noise”. This is where programmatic approaches to analysing data come in. Analytics and data science firms across the globe have refined the art of deriving insights from seemingly unconnected data. Beyond analysing the customer footprint in your place of business, you get to analyse the customer footprint across various other channels and social media platforms.

As a sample, consider this infographic created for one of our customers:

[Infographic: The New Age of Customer Satisfaction]
This aims to profile the customers who are most susceptible to local deals/rewards/coupons based on their buying patterns.

How is this done? The answer is rather simple. Customer segmentation algorithms (both supervised and unsupervised) enable you to piece together seemingly random bits of information about the customer and analyse the effect they have on a target event. You will be surprised at the insights that come out of this exercise. Obviously, caution needs to be exercised to ensure that the marketeer doesn’t get carried away by random events which are purely driven by chance.
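
    As a hedged illustration of the unsupervised flavour mentioned above, the short Python sketch below clusters customers on a few behavioural features using k-means. The features, the number of segments and the data are invented for illustration; they are not drawn from any client engagement.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical customer features: [visits per month, avg basket value,
    # coupon redemptions in the last quarter]
    X = np.array([
        [2, 35.0, 0],
        [8, 22.5, 5],
        [1, 120.0, 0],
        [9, 18.0, 7],
        [3, 95.0, 1],
        [7, 25.0, 6],
    ])

    # Scale features so no single unit dominates the distance metric
    X_scaled = StandardScaler().fit_transform(X)

    # Group customers into segments; deal-prone shoppers tend to land together
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X_scaled)
    for features, segment in zip(X, kmeans.labels_):
        print(f"customer {features} -> segment {segment}")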

Okay – so I have made some sense out of my data. But this is a rather cumbersome process which does not make any difference to the way I deal with my customer on a day-to-day basis.

    “How do I get this information on a real-time basis so that I can actually make some decisions to improve my customer’s experience as and when it is applicable?”

This brings us to the newest and most relevant trend in making data science a mainstream part of decision making. How do we integrate this insight-deriving platform into the client’s CRM system so that the client can make efficient decisions on a real-time basis?

At Tredence, for one of our leading technology clients, we have built an AI-based orchestration platform which derives actionable insights from past customer data and integrates them into the customer’s CRM system, so they become readily available to all marketeers as and when they attempt to send out a communication to their customers.

What does this entail? It entails using the right technology stack to build a system which can deliver insights from the data science modules at scale. I prefer to call it a synergy of data science and software development. Every decision that a marketeer is trying to make must be processed through a system which invokes the in-built DS algorithms in real time through the relevant cloud computing platforms. Insights are delivered immediately, and suitable recommendations are also made on a real-time basis.

This is the final step in ensuring that personalised recommendations made to every customer are truly personalised. We at Tredence call it “The Last Mile adoption”. This development is still in its nascent phase. However, companies would be wise to integrate this methodology into their data-science-driven decision making, since it is very unlikely that they will hit the holy grail of customer satisfaction without delivering real-time personalised recommendations.

    The post The New Age of Customer Satisfaction appeared first on Tredence.

    ]]>
    Applications of AI in Document Management http://www.zhazhai36.com/blog/applications-of-ai-in-document-management/ http://www.zhazhai36.com/blog/applications-of-ai-in-document-management/#respond Wed, 12 Jun 2019 07:14:29 +0000 http://www.zhazhai36.com/?p=6037 “We are drowning in information, but starved for knowledge”
    This is a famous quote by John Naisbitt which shows the key difference between information and knowledge.

    The post Applications of AI in Document Management appeared first on Tredence.

    ]]>
Pavan Nanjundaiah
    Head of Solutions

“We are drowning in information, but starved for knowledge”

This is a famous quote by John Naisbitt which captures the key difference between information and knowledge. Advancements in data engineering techniques and cloud computing have made it easy to generate data from multiple sources, but making sense of this data and extracting insights is still a huge challenge. Data volumes have increased exponentially, and along with traditional structured data, data can now reside in formats like unstructured social media text, log files, audio/video files, streaming sensor data, etc.

Applying manual methods to process this diverse data is not only time consuming and expensive but also prone to errors. Hence the need of the hour is to use Artificial Intelligence (AI) based automated solutions that can deliver reliable insights and give customers a competitive advantage. Here are a few examples of how customers across industries can benefit from AI-driven solutions.

    Microsoft Azure based AI solution

In 2017, more than 34,000 documents related to John F. Kennedy’s assassination were released. The data volume was huge, and the data existed in different formats such as reference documents, scanned PDF files, handwritten notes and images. It would have taken researchers months to read through this information, so manually reviewing the data was not the most optimal approach. The Microsoft Azure team applied an AI-based Cognitive Search solution to extract data from these diverse sources and gain insights. The technical architecture for this use case was built using Azure Cognitive Services components like Computer Vision, Face Detection, OCR, Handwriting Recognition and Search, and core Azure components like Blob Storage, Azure ML, Azure Functions and Cosmos DB. The solution also annotated text using custom CIA cryptonyms.

Hospitals usually deal with a lot of patient data, which could reside in electronic medical records (EMR), handwritten prescriptions, diagnostic reports and scanned images. AI-based Azure Cognitive Search could be an ideal solution to efficiently manage patients’ medical records and create personalized treatment plans. Many downstream use cases like Digital Consultations, Virtual Nurses and Precision Medication can be built once the patient data is optimally stored.

    Google Cloud Platform (GCP) based AI solution

GCP introduced Document Understanding AI (beta) at Cloud Next ’19. This is a serverless platform that can automate document processing workflows by processing data stored in different formats and building relationships between them. The solution uses GCP’s Vision API, AutoML, machine-learning-based classification and OCR to process image data, and a custom knowledge graph to store and visualize the results. Customers can easily integrate this solution with downstream applications like chatbots, voice assistants and traditional BI to better understand their data.

Customers who deal with contract management data such as mortgages usually face a lot of manual tasks to ensure that contracts are complete and accurate. This could mean processing contracts in different formats and languages, reviewing the supporting documents, and ensuring that the details are accurate and comply with regulatory standards across documents. By using Document Understanding AI and integrating it with a well-designed RPA framework, customers will be able to efficiently process mortgage applications, contracts, invoices/receipts, claims, underwriting and credit reports.

    Use cases from other industries

A document management AI solution can also be applied to diverse use cases from other industries, such as processing claims related to damaged shipments by e-commerce companies, handling the know your customer (KYC) process in the banking industry, invoice data processing by finance teams, and fraud detection during document processing.

As more and more companies embrace the digitization wave, they will be faced with different variations of data/document management challenges. Based on the current trend, the number of use cases is only going to increase, and an AI-driven solution is probably the most efficient way to solve this problem, as it can reduce manual work, save cost and deliver reliable insights. This will ensure that companies can spend more time on building their business and less time on manually processing documents and preparing data.

    Going back to John Naisbitt’s quote, AI and ML driven solutions are probably the only way to bridge the gap between information and knowledge.

    The post Applications of AI in Document Management appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/applications-of-ai-in-document-management/feed/ 0
    Bringing the promise of ML to your MDM: Part 1 http://www.zhazhai36.com/blog/bringing-the-promise-of-ml-to-your-mdm-part-1/ http://www.zhazhai36.com/blog/bringing-the-promise-of-ml-to-your-mdm-part-1/#respond Thu, 06 Jun 2019 07:42:45 +0000 http://www.zhazhai36.com/?p=5901 Enterprises are rushing to transform themselves, and embrace the promise of digital transformation; and while the means to achieve this end are disputed, there is unanimous agreement on the fact that reliable data is the starting point.

    The post Bringing the promise of ML to your MDM: Part 1 appeared first on Tredence.

    ]]>

    Enterprises are rushing to transform themselves, and embrace the promise of digital transformation; and while the means to achieve this end are disputed, there is unanimous agreement on the fact that reliable data is the starting point.

    Tools to facilitate your decision supply chain, starting from vanilla BI reporting to the most complex AI/ML predictive algorithms, are only as good as the data you start with.

    So, how are enterprises trying to achieve “reliable data” today?

    There are, of course, a number of solutions – ranging from investing in traditional MDM platforms to data aggregators/enrichment providers, and even emerging ML-based cleansing tools.

    However, if you are a decision maker tasked with ensuring the availability of clean and reliable data for your business, you live in a world of challenges; let me try and get you nodding about some of them.

    1. The curse of legacy systems:

    The data value chain from capture -> ingestion -> storage -> management -> analytics & insight is rapidly maturing, and this means that enterprises are often stuck with legacy systems which were set up as siloed, decentralized sources; these systems rely heavily on manual inputs & checks from IT teams, and fail to keep up with increasing data sources that business wants to analyze and understand.

    As a result, you are constantly dealing with new complexities in a world where current systems are already not at their best.

2. The promise of the cloud:

Cloud migration offers cost-effectiveness, greater agility and increased feature sets, and eventually presents greater possibilities with your data; however, besides being a strategic decision, it poses additional complexities in how you choose your tech stack and lay out a roadmap that ensures your data management challenges are addressed while moving to the cloud.

3. The need to solve effectively, with agility:

    Business leaders generally need to build a business case before investing in any solution; and rather than have a one-size-fits-all, they typically want to solve high priority use cases as part of a larger roadmap. The need is a solution that is agile enough to be customizable and rapidly deployed to a use case.

    • So, how do I prioritize my challenges and demonstrate value on key business initiatives with a quick turnaround?
    • Secondly, how can machine learning and AI help my data get more accurate and reduce manual efforts?
4. The fear of poor RoI:

Established enterprise vendors in the market typically sell software with multiyear agreements and long implementation cycles, and require expensive licensing and specialized stewardship. Not to mention, the data still requires manual preparation, cleansing and quality checks before the first drop of insight falls from the tap.

• The key question here is: how do I deal with the obvious questions of RoI, accuracy improvements and ongoing maintenance, to ensure the business continues to get quality data as needs evolve?

    With this world of challenges, what are some must-haves from a potential solution?

• A solution that integrates with a variety of legacy sources, with capabilities to unify any flavor of master data (including customer, contact, vendor, product, material, etc.)
    • A solution that presents multiple options for integration and usage, with cloud, on-prem and hybrid configurations
    • A solution with pre-built ML modules and training sets that can deliver rapid proof of concepts that address pressing business needs, while also improving as business users provide feedback
    • A solution that is white-box and comes with managed service offerings that can address ongoing needs of enhancements

    Note that regardless of your current data maturity, these are some of the core pillars & use cases that will need to be solved.

And that is why we built Sancus, an AI/ML based data management suite to deliver reliable data to your business.

    Sancus is an ‘augmented data management’ solution, which Gartner has called out as a key data & analytics technology trend for 2019.

    Learn more about Sancus
    In the next part of this series, we will explore Sancus in more detail – features, architecture, differentiators and some key implementations where we have driven success for our clients.

    The post Bringing the promise of ML to your MDM: Part 1 appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/bringing-the-promise-of-ml-to-your-mdm-part-1/feed/ 0
    Digital Transformation / Industry 4.0 http://www.zhazhai36.com/blog/digital-transformation-industry-4-0/ http://www.zhazhai36.com/blog/digital-transformation-industry-4-0/#comments Fri, 17 May 2019 06:27:09 +0000 http://www.zhazhai36.com/?p=5691 Digital Transformation / Industry 4.0 is on everyone’s mind. Investors are happy to hear from organizations that they are embarking upon a complete Digital Transformation journey. Investors love it, leaders advocate for it, directors have to make it a reality, managers have to design for it, but few understand what all it means in the grand scheme.

    The post Digital Transformation / Industry 4.0 appeared first on Tredence.

    ]]>

Digital Transformation / Industry 4.0 is on everyone’s mind. Investors are happy to hear from organizations that they are embarking upon a complete Digital Transformation journey. Investors love it, leaders advocate for it, directors have to make it a reality, managers have to design for it, but few understand what it all means in the grand scheme.

    Hopefully, we can simplify this world for you.

    What is Digital Transformation? Let’s keep it simple.

    The simplest way to describe Digital Transformation is “Using Digital technology, innovation and intelligence to find better ways to do various things that organizations do today. It’s not about creating something new, but more about improving effectiveness and efficiency of existing processes for better business outcomes.”

    Digital Transformation started as Industry 4.0 in some places. However, the idea remains the same. While Industry 4.0 started with the intention of transforming the manufacturing processes using Digital Technology, the principles of Digital Transformation now apply to all functions across the organization.

    How does this theory apply in practice? Let’s study an example:

    Step 1 – Current State

    Map out the current process to uncover gaps that can be filled with better technology or intelligence.

    Consider a global paper products manufacturing company. The manufacturing team is constantly trying to find opportunities to improve efficiency and productivity and reduce costs.

    1. Energy consumption is a big area of focus for the manufacturing team. Currently, manufacturing reports and energy dashboards are used to track the consumption of energy across a few important machine parts.
    2. Operators use these dashboards to identify sections of machines that are in green/red (good/bad) zones in terms of energy consumption and adjust the settings to optimize energy consumption.
    3. These dashboards only track a limited set of machine parts that influence energy consumption.

    Step 2 – Future State

    Outline what the future should look like, after Digital Transformation.

Energy consumption of machines at the mill (with specific reference to tissue machines) can be reduced by finding the key driving factors of energy consumption and determining their optimal settings, while factoring in the production constraints in terms of time, quantity and quality.

    The following challenges will have to be addressed to get to the future state

    1. There are a few hundred variables in a tissue machine that determine the energy consumption. These machine variables have to be studied comprehensively to identify the key influential factors for energy consumption. Relationships between these variables also need to be considered.
2. A detailed and statistically robust mechanism must be created to generate insights/correlations across all relevant machine variables, so that proactive steps can be taken to minimize energy consumption.
    3. Study the process characteristics that influence energy consumption and optimize them. E.g. machine speed, maintenance schedule, aging of parts.

    Step 3 – Establish how technology, data and analytics can bridge this gap.

The best Digital Transformation approach for this example would be:

1. Select a machine, in a market, which can be managed and monitored easily. Maturity in terms of capturing data, and the groundwork already achieved for manufacturing systems and lean energy dashboards, provide immediate feasibility in terms of execution and adoption.
    2. Build a Driver Model to understand key influential variables and determine the energy consumption profile.
      1. Identify Key Variables –
        1. There are ~ 600 machine parts that drive the consumption of energy of a tissue machine. First, shortlist the top contenders and eliminate the non-influencer variables, using inputs from technical teams and plant operators.
        2. Identify primary drivers among the selected machine variables using variable reduction techniques of Machine Learning.
      2. Driver Model –
1. Build multivariate regression models to understand the impact of the top drivers of energy consumption, using techniques like linear regression, Ridge/LASSO regression and Elastic Nets (a minimal sketch of this step follows the list below).
      3. Optimize the engine to lower energy consumption.
        1. Optimize energy consumption by identifying the right combination of drivers under the given production constraints – time, quantity and quality.
        2. Create a mechanism to provide guidance during the actual production hours (In-line monitoring).
          1. Track energy consumption of the machine parts and their active energy consumption states. Identify deviation from the standards.
          2. In case of deviation, provide guidance to machine operators to bring the energy consumption to within defined limits.
      4. Adoption
        1. Real-time dashboards, refreshed weekly, provide charts on energy consumption, recommendations, and improvements achieved through proactive measures.
        2. Post-live support to operations teams to enable adoption.
      5. Scaling
        Determine phased roll-out to other machines using

        1. Strategic initiatives.
        2. Machines or mills which utilize higher amounts of energy to target higher ROI.
        3. Similarity in process and parts characteristics of tissue machines.
        4. Data availability and Quality.
        5. Readiness and groundwork for adoption by plant operators and energy management teams.
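
    As referenced in the Driver Model step above, here is a minimal, hypothetical Python sketch of that idea: shortlist influential variables with an L1-regularized model, then fit an Elastic Net to quantify their impact on energy consumption. The variable counts, data and parameters are invented stand-ins, not the plant's actual model.

    import numpy as np
    from sklearn.linear_model import LassoCV, ElasticNetCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_samples, n_vars = 500, 50          # stand-in for the ~600 machine variables
    X = rng.normal(size=(n_samples, n_vars))
    # Synthetic energy consumption driven by a handful of variables plus noise
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n_samples)

    X_scaled = StandardScaler().fit_transform(X)

    # Step 1: variable reduction – Lasso shrinks non-influential coefficients to zero
    lasso = LassoCV(cv=5).fit(X_scaled, y)
    drivers = np.flatnonzero(lasso.coef_)
    print("candidate drivers:", drivers)

    # Step 2: driver model – Elastic Net on the shortlisted variables
    enet = ElasticNetCV(cv=5).fit(X_scaled[:, drivers], y)
    for idx, coef in zip(drivers, enet.coef_):
        print(f"variable {idx}: estimated impact {coef:+.2f}")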

    4 key stages in Digital Transformation

How should you, as a leader in an organization, look at Digital Transformation? Organizations should consider the 4 key stages of Digital Transformation in order to create a sustainable impact. To make Digital Transformation a reality, these stages cannot work independently; the philosophies of Design Thinking are embedded in the framework’s interconnected elements.

    DEVELOPMENT PHASE:

    Focus is on identifying the key areas and prioritizing the Digital Transformation efforts

    Stage 1 – Discovery

    Identify the key areas of opportunity or risk and related key stakeholders. Detail out the gaps in process, data, insights or technology, fixing which would help capture opportunities or mitigate risks.

    Stage 2 – Design

Rapid iterations on the design and implementation of prototypes help reach optimal solutions faster. Build out Proofs of Concept (PoC) to establish the theoretical validity of the approach. Validate the practical validity of the approach through a Proof of Value (PoV).

    IMPLEMENTATION PHASE:

    Implementation needs to account for limitations arising from human behaviour and scale of the operations.

    Stage 3 – Adoption

Building solutions that keep the user at the center of the design is key to adoption. This means that users must be included in the design and feedback loop early on. In addition, there should be support for users post design, in the form of FAQs, training videos, chatbots, etc.

    Stage 4 – Scalability

If we can’t solve a problem at scale, then the solution does not solve organizational problems. The issues we anticipate at scale should be accounted for in the design during the Development phase. This means considering the technology used, the infrastructure required, the process automation possible/required, and how to manage future developments.

    Like Design Thinking would dictate, the Development phase of the Digital Transformation processes have to always consider the Implementation aspects.

    Digital Transformation is no longer just optional.

Every organization is transforming the way it does business. Numerous organizations like BASF, Mondelez, KLM Airlines, Aptar Group and PepsiCo are already making massive strides in this area.

If you want to zip past your competition, or even stay competitive, it’s about time you started thinking about how to transform the way you do business. After all, there’s no growth in comfort.

    The post Digital Transformation / Industry 4.0 appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/digital-transformation-industry-4-0/feed/ 1
    Supervised stack ensemble with natural language features: Driving Customer Service Optimization http://www.zhazhai36.com/blog/supervised-stack-ensemble-with-natural-language-features-driving-customer-service-optimization/ http://www.zhazhai36.com/blog/supervised-stack-ensemble-with-natural-language-features-driving-customer-service-optimization/#comments Wed, 29 Nov 2017 12:25:48 +0000 http://www.zhazhai36.com/?p=1408 In the age of social media, companies are conscious about the reviews that are posted online. Any act of dissatisfaction can be meted out by way of tart sentiments on these platforms. And so enterprises strive hard to give 100% positive experience, by doing all that they can to address customer...

    The post Supervised stack ensemble with natural language features: Driving Customer Service Optimization appeared first on Tredence.

    ]]>

Saurabh Vikash Singh
    Manager, Tredence


In the age of social media, companies are conscious about the reviews posted online. Any act of dissatisfaction can be met with tart sentiments on these platforms. And so enterprises strive hard to give a 100% positive experience, doing all they can to address customer grievances and queries. But as they say, there is many a slip between the cup and the lip – not all grievances can be handled amicably.

Let’s take the specific case of call centers here. Their Service Level Agreements mention metrics like the number of calls answered at a certain time of the day, the percentage of calls answered within a specific waiting time, etc. Ensuring customer satisfaction and retention requires a far deeper, more holistic view of the interaction between the customer care representative (agent) and the caller. There are other KPIs, such as what causes a customer to be dissatisfied and the number of escalations, but these seldom find a place in the SLA.

In this article, we will talk about identifying the drivers of (dis)satisfaction and ways to improve it. Along the way, we will touch upon a solution design that can scale and institutionalize real-time decision making.

    Introduction

    We’ve all done it, dialing the call center for any issue encountered. We are surely an expressive bunch when it comes down to rattling our emotions and spitting out our dissatisfaction. And if that is not enough, we threaten to let our dissatisfaction be known to the rest of the world – through social media, not to mention #CustomerExperience.

    While standard surveys exist to capture the sentiments of customers, the percentage of people filling these surveys is very low. This compounds the problem of effectively addressing customer needs.

    Automating the task of predicting customer satisfaction requires a balanced mixture of text mining, audio mining, and machine learning. The resulting solution needs to:

    • Scale and be deployable
    • Identify the drivers of dissatisfaction
    • Generate actionable insights and generalize well to the population

    Modeling Pipeline

The modeling pipeline includes all the components (data ingestors, model builder, model scorer) involved in model building and prediction. For the pipeline to be scalable, deployable and production worthy, it must seamlessly integrate all these components. The components vary depending on the problem, the available architecture, the tools used, the scale of the solution and the turnaround time. The following pipeline was built on Google Cloud to solve the problem of dissatisfaction in call centers.

    Modeling (actual work – driver identification)

    In the above problem, the satisfaction survey showed good internal consistency. Calls, emails and chats had sufficient discriminatory power to model customer satisfaction. Exploration of the data showed that the patterns were non-linear. However, like other psychometric models, the satisfaction model was plagued by three major issues which threatened its external consistency: shortage of data, variance and instability. These problems were addressed in the following manner:

First, the issue of data shortage was solved using resampling (bootstrapping). Second, the challenge of model instability was resolved using k-fold cross validation to tune the hyperparameters of different models, followed by model averaging. Finally, the issue of model variance was addressed using a stack ensemble approach on bootstrap samples. Several classification algorithms were used to build the first layer of the stack, and logistic regression was used to predict the outcome by combining the results from the first layer. The accuracy thus obtained was superior to that of any individual model in the first layer of the stack.
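
    A minimal sketch of a stacked ensemble in this spirit is shown below, assuming scikit-learn and synthetic data. The base models, parameters and features are illustrative assumptions rather than the production pipeline described above; the key point is a first layer of diverse classifiers whose out-of-fold predictions are combined by a logistic regression meta-learner.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Stand-in for engineered text/audio features labelled with survey satisfaction
    X, y = make_classification(n_samples=1000, n_features=30, random_state=7)

    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=7)),
            ("gbm", GradientBoostingClassifier(random_state=7)),
            ("svm", SVC(probability=True, random_state=7)),
        ],
        final_estimator=LogisticRegression(),   # combines first-layer predictions
        cv=5,                                    # k-fold CV to build the meta-features
    )

    print("stacked accuracy:", cross_val_score(stack, X, y, cv=5).mean())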

    Driver Analysis

Only two types of classification models are directly interpretable: logistic regression and decision trees. Interpreting other machine learning techniques such as regularized regression and regression splines requires knowledge of calculus, geometry and optimization. Models such as support vector machines and neural networks are considered black-box techniques because of their high dimensionality, which is difficult for the human brain to comprehend.

Standard measures of variable importance exist for commonly used black-box techniques such as SVMs and neural networks. A simple weighted average is used to calculate the importance of variables in the stack ensemble, with the weights determined by the logistic layer. However, it is important to note that the final importance is not a measure of the linear dependence of satisfaction on the independent variables. The importance metrics need to be combined with business intuition and actionability to provide recommendations for improving customer satisfaction.
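
    The weighted-average idea can be illustrated with a small, purely numeric sketch: each first-layer model reports its own importance scores, and the absolute logistic-layer coefficients decide how much each model's view counts. The numbers below are invented for illustration.

    import numpy as np

    # Rows: first-layer models; columns: features (e.g. hold time, agent tenure, escalations)
    importances = np.array([
        [0.40, 0.35, 0.25],   # random forest feature importances
        [0.50, 0.30, 0.20],   # gradient boosting feature importances
        [0.30, 0.45, 0.25],   # permutation importance for the SVM
    ])

    # Absolute logistic-layer coefficients, normalized to sum to one
    logit_weights = np.abs(np.array([1.2, 0.8, 0.5]))
    logit_weights = logit_weights / logit_weights.sum()

    # Weighted average of each model's importance profile
    ensemble_importance = logit_weights @ importances
    print("ensemble importance per feature:", np.round(ensemble_importance, 3))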

    Consumption

A call center manager would like to track the customer satisfaction level along with several KPIs that are critical to operations. Information related to the utilization of customer care representatives is provided to the manager in real time. Model prediction is run in semi-real time to reduce the required computational power. The manager is provided with options to deep dive into historical data based on the variables that drive dissatisfaction. For example, calls can be redirected to customer care representatives by existing ERP systems based on their history and subject matter expertise. This reduces the number of escalations and enables near real-time actionability without significantly affecting other KPIs.

The problem of customer dissatisfaction in call centers can be solved using audio mining, text mining and machine learning. Intelligent systems greatly reduce the stress on customer care representatives by automating the majority of the processes. These cloud-based systems can be seamlessly integrated with existing ERP systems to provide highly actionable insights about dissatisfaction without significantly affecting other critical KPIs related to call center operations.

    The post Supervised stack ensemble with natural language features: Driving Customer Service Optimization appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/supervised-stack-ensemble-with-natural-language-features-driving-customer-service-optimization/feed/ 1
    Just ask Alexa, The machine in the corner room http://www.zhazhai36.com/blog/just-ask-alexa-the-machine-in-the-corner-room/ http://www.zhazhai36.com/blog/just-ask-alexa-the-machine-in-the-corner-room/#comments Fri, 06 Oct 2017 11:10:59 +0000 http://www.zhazhai36.com/?p=1279 AGCS (Alexa, Cortana, Google, Siri), as I fondly call these services, certainly have taken over my life. I talk to them every day. I take their help in research. I tell them to remind me of important tasks. I even ask them turn on / off different appliances at home. And they never complain! Now who yells at me or more...

    The post Just ask Alexa, The machine in the corner room appeared first on Tredence.

    ]]>

AGCS (Alexa, Cortana, Google, Siri), as I fondly call these services, have certainly taken over my life. I talk to them every day. I take their help in research. I tell them to remind me of important tasks. I even ask them to turn different appliances at home on and off. And they never complain! Now who yells at me – or, more importantly, who do I yell at?

The last few years have been a point of inflection in the area of personal assistants, or PIPs (Personal Informational Programs). They have gained a voice of their own – to say the least. Voice-enabled assistants, or voice assists, are an evolution in human-machine interactions. When I say, “I speak with Alexa,” people are no longer surprised. They are just confused – am I referring to the Alexa service or to a real person! Now that’s what I call the first step in machine takeover – the blurring!

    Some serious business:

At Tredence, we have been experimenting with Alexa for a couple of months now. What started out as an exploratory process (who and what is Alexa, and how can I have a conversation with her) has led to a more objective-driven program. We like to call this Voice Enabled Insights (VEI).

By integrating Alexa with Tableau, we have managed to provide a short synthesis of how the business is performing. And the best part: the insights are refreshed every morning. Operational managers can now have free-wheeling conversations with this voice-enabled feature, enhanced with Tableau. What a way to consume your morning coffee insights! The icing on the cake is that our system also crawls the web to provide competitor information, so you cover the complete landscape. And then, if you want to discuss, you can ask humble Alexa to schedule a meeting with your required stakeholders (say, a territory manager) through O365 integrations.
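
    For readers curious what the voice side of such an integration can look like, here is a minimal sketch of a custom Alexa skill backend (an AWS Lambda handler in Python) that returns a morning KPI summary. The intent name and the fetch_kpi_summary() helper are hypothetical placeholders; this is not the actual Tredence VEI implementation or its Tableau integration.

    def fetch_kpi_summary():
        """Hypothetical placeholder for a query against the analytics layer feeding the dashboards."""
        return "Yesterday's sales were up 4 percent; the West region led growth."

    def build_speech_response(text, end_session=True):
        """Wrap text in the JSON envelope an Alexa custom skill expects."""
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": text},
                "shouldEndSession": end_session,
            },
        }

    def lambda_handler(event, context):
        request = event["request"]
        if request["type"] == "LaunchRequest":
            return build_speech_response(
                "Welcome to morning insights. Ask me how the business is doing.",
                end_session=False,
            )
        if request["type"] == "IntentRequest" and request["intent"]["name"] == "GetBusinessSummaryIntent":
            return build_speech_response(fetch_kpi_summary())
        return build_speech_response("Sorry, I did not catch that.")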

So far, we have taken a small step towards a future that is closely integrated, connected and alive all the time – thanks to voice enablement. Looking into the future, imagine a situation where a patient’s family speaks to the panel and asks for the patient’s condition. They receive prompt information on the room, current health parameters and in-operation status, if the patient is being monitored live. No more long waiting times and anxiety attacks at the help desk.

How about doctors? They can optimize their time by getting critical patients’ conditions and issuing necessary instructions to nurses in near real time. The same goes for any enterprise where there is a lot of personal interaction between service provider and consumer.

Now that we have covered the most important aspect from a personal standpoint – health – let’s move to industrialization and the phenomenon of IoT. There have been rapid advancements in machine-to-machine communication and the so-called intelligent machines. Add a larynx (a voice-enabled feature) to this combination, and I can simply step up to a panel and enquire what the output has been so far, whether there are any issues with the systems, and issue commands to reroute if there is a line fault. All of this without even lifting a finger, literally “speaking”!

In most cases, what we discussed is the benefit of the voice-enabled feature in B2B or B2C scenarios. But this is not all. The corner room assistant can help provide on-demand and interactive directory services, serve as a knowledge bank, and project manage. She can facilitate efficiency and timely decisions, and can also gamify training using skills- and stories-based modes for self-learning. Simply put, all we need to be is creative; the tools are already getting quite smart, to say the least.

It is a given today that Alexa and other services are changing the world and how we interact with it. With the time to act constantly getting shorter, these disruptive innovations will play a greater role in how connected we are. Voice-enabled insights, while not new in concept (remember IVRs), are beginning to gain popularity owing to the rapid propagation of machine learning and artificial intelligence. They are simply becoming more human in their interactions. It would be wise to join the race sooner rather than later. But here’s the deal: start out on the journey in incremental ways and then scale. Soon there will be a time when we will adjectivize and say, ‘Just Ask Alexa!’

    The post Just ask Alexa, The machine in the corner room appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/just-ask-alexa-the-machine-in-the-corner-room/feed/ 1
    Second spin: Driving efficiency and happiness through PMO http://www.zhazhai36.com/blog/second-spin-driving-efficiency-and-happiness-through-pmo/ http://www.zhazhai36.com/blog/second-spin-driving-efficiency-and-happiness-through-pmo/#respond Tue, 19 Sep 2017 07:36:12 +0000 http://www.zhazhai36.com/?p=1256 In my previous blog, we looked at how the Project Management Organization (PMO) at Tredence enables excellence across projects. In gist, traditional PMOs focus on ensuring projects are completed on schedule, and processes are followed the right way. At Tredence, the PMO group allows improved...

    The post Second spin: Driving efficiency and happiness through PMO appeared first on Tredence.

    ]]>
Sanat Pai Raikar
    Senior Manager, Tredence

In my previous blog, we looked at how the Project Management Organization (PMO) at Tredence enables excellence across projects. In essence, traditional PMOs focus on ensuring projects are completed on schedule and processes are followed the right way. At Tredence, the PMO group enables improved project planning, monitoring and control.

    In this blog, we will look at how PMO at Tredence drives efficiency on a day-to-day basis, which in turn drives improved work-life balance for employees, as well as improved quality and satisfaction for our clients.

    Fostering an efficiency based mindset is key – constant improvement manifests itself not just in improved quality, but better job satisfaction as well

    Stuck in a rut

Analytics services teams typically follow two modes of operation – medium-term to long-term projects to solve specific business problems, and continued engagements to answer quick-turnaround requests from clients. The latter typically involve same-day deliverables, which lead to a constant time crunch for teams. Teams working on such projects have to, in a way, complete a mini analytics project within a day. This leads to immense pressure in planning one’s day and completing all tasks as per client needs. As time passes, employees in such teams face burnout as they work day in and day out on similar tasks. Besides, a tendency to do the job with eyes shut also creeps in, leaving no room for innovation in the interest of urgent deliverables.

    Tracking without tracking

As soon as a process or standard method of doing a set of tasks is introduced, it is immediately countered with resistance from employees who are used to working without processes. So, if I compelled all employees to, say, track their time on an hourly basis and penalized them for all slips from the plan, I could guarantee that no one would follow it; even if they did, it would be with utmost reluctance and copious stress to themselves.

Alternatively, imagine I set a guideline to the tune of “We will all endeavor to leave by 7 PM every day.” No pressure here! But if an employee is not consciously trying to improve, and then observes most of his colleagues leaving before 7 PM, chances are he will start thinking about following the “best practice” himself. This is a passive way of fostering efficiency and change management.

One can define a hundred processes in the interest of efficiency improvement, but unless individual employees buy into the concept, it will all fail

    Passive is not enough

Of course, it will not do to expect things to improve of their own accord. The above strategy can at best lead to incremental improvements, and at worst not help matters at all. The PMO needs to actively foster a culture of continuous improvement. At Tredence, we have worked closely with delivery teams to help them identify the sources of inefficiency. These could be external causes, such as latencies linked to client-based infrastructure, or traffic woes at rush hour. Causes could be internal as well, such as promising more than we could deliver, or going about work in a non-optimal manner. By quantifying the time lost due to each of these causes, we have directly addressed the reasons for inefficiency, fixed them to the extent possible, and created time for employees.

    Out of the rut

    Once employees realize that the organization is bought into the concept of helping them gain more time out of a day, they buy into the initiatives as well. The value they see coming out of such initiatives justifies the time they spend on providing data / reports for further improvement. As this percolates across levels, employees feel empowered to innovate themselves and the work they do on a daily basis, continuously making themselves as well as their colleagues better.

    At Tredence, we have enabled multiple teams to identify causes of inefficiency and act on these with improvement goals in mind. The time saved has enabled employees to invest not just in providing more value-added services to our clients, but also to themselves – utilizing the time for learning new skills, improving themselves and getting better at what they do.

    How does the PMO team in your organization go beyond just process excellence? Share your thoughts and best practices with us in the comments section.

    The post Second spin: Driving efficiency and happiness through PMO appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/second-spin-driving-efficiency-and-happiness-through-pmo/feed/ 0
    A new spin to PMO: Driving excellence in a complex business environment http://www.zhazhai36.com/blog/a-new-spin-to-pmo-driving-excellence-in-a-complex-business-environment/ http://www.zhazhai36.com/blog/a-new-spin-to-pmo-driving-excellence-in-a-complex-business-environment/#respond Fri, 28 Jul 2017 13:52:55 +0000 http://www.zhazhai36.com/?p=1104 Go to any of the myriad analytics services providers that proliferate the industry today, walk up to any manager, and ask him if any of the analytics projects he works on is similar to the other. Chances are extremely remote that you will receive a response in the affirmative...

    The post A new spin to PMO: Driving excellence in a complex business environment appeared first on Tredence.

    ]]>
Sanat Pai Raikar
    Senior Manager, Tredence

    Go to any of the myriad analytics services providers that proliferate the industry today, walk up to any manager, and ask him if any of the analytics projects he works on is similar to the other. Chances are extremely remote that you will receive a response in the affirmative.

Let’s go one step further. Ask the manager how easy it is to hire people with the right skills for different projects and ensure they learn on the job while staying efficient throughout. Be prepared for a long rant on the complexities and vagaries of finding good talent and utilizing it to the fullest.

    PMO enables application of what we sell, analytics, to our own processes for betterment and continuous improvement

    Challenges at scale

    You would have figured out by now that analytics services companies enable their clients to solve complex business problems. And since each business problem is unique, the approach taken to solve it becomes unique as well. This leaves us with a large set of unique, mutually exclusive analytics projects running at any given point in time; each requiring a separate set of resources, time and infrastructure.

Small analytics organizations can handle this complexity because of multiple factors – a very strong and smart core team, fewer projects to manage, and fewer layers of hierarchy within the organization. But as the analytics services company grows, it becomes increasingly difficult to ensure each project is running efficiently and on the right track. The problem is exacerbated by two facts: the flexibility of a startup is not easily scalable; and putting processes in place to bring some order into the system is something employees – especially old-timers – chafe at. This is where the prominence of the PMO kicks in.

    Setting up, and moving beyond the traditional PMO

When a startup evolves into a mature, established analytics services company, that growth often masks the fact that the company lacks strong processes to scale. In the absence of organization-wide standard processes for running projects, siloed processes start to take form – or, in some cases, no processes at all.

    But this leads to inconsistencies in how project delivery is executed. Similar projects are often estimated in different and sometimes erroneous ways; projects are staffed with people who don’t have the right skills, and knowledge often gets lost when team members attrite. Adding to the list of pains, projects don’t get invoiced in time, invoicing schedules are not consistent, and many projects are executed without formal contracts in place. Senior leadership also lacks a common view into the health of project delivery and the pulse of resources working on these projects, at the ground level.

    A good PMO organization faces the same problems as a kite flyer – too many processes, and the kite will never take off; too few, and the kite flies off into the wind. But kite flying technique is important as well.

The focus of a traditional Project Management Organization (PMO) is more towards ensuring projects are completed on schedule and processes are followed the right way. However, for true maturity in delivering analytics services, the PMO needs to move beyond just process focus. It should enable improved project planning, monitoring and control.

    It should ensure the right issues are identified at the right time and addressed accordingly. It should ensure people across the organization speak the same language and terms, and provide the leadership team a single view into business performance. At the tactical level, a PMO group should help employees become more efficient and process-oriented. It should foster a culture of accountability, automation and quality control to ensure improved satisfaction for clients as well.

    The right level of process

Setting up a PMO group is only half the battle won. The PMO setup needs to regulate the proverbial oxygen flow so employees don’t feel constricted in a mire of process bureaucracy, or, on the other hand, continue in a false euphoria of individual project flexibility. Internal change management needs to be a smooth process. While adding processes layer by layer, care needs to be taken to ensure that employees do not feel “pained” by the PMO “demands” on top of their day-to-day deliverables.

At Tredence, the PMO drives improved quality and timeliness of work outputs, while also serving as a means to achieve work-life balance for our employees. Through a well-planned alignment of employees to the projects which best match their skills, we ensure each team is best equipped to deliver more than the promised results to our clients. In our next blog, we shall discuss in more detail how our PMO group drives improved efficiencies within Tredence and makes our employees more efficient and happy.

    So what does the PMO role in your organization look like? Share your thoughts and best practices with us in the comments section.

    The post A new spin to PMO: Driving excellence in a complex business environment appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/a-new-spin-to-pmo-driving-excellence-in-a-complex-business-environment/feed/ 0
    Data Lakes: Hadoop – The makings of the Beast http://www.zhazhai36.com/blog/data-lakes-hadoop-the-makings-of-the-beast/ http://www.zhazhai36.com/blog/data-lakes-hadoop-the-makings-of-the-beast/#comments Thu, 08 Jun 2017 08:24:43 +0000 http://www.zhazhai36.com/?p=965 1997 was the year of consumable digital revolution - the year when cost of computation and storage decreased drastically resulting in conversion from paper-based to digital storage. The very next year the problem of Big Data emerged. As the digitalization of documents far surpassed the estimates...

    The post Data Lakes: Hadoop – The makings of the Beast appeared first on Tredence.

    ]]>

1997 was the year of the consumable digital revolution – the year when the cost of computation and storage decreased drastically, resulting in a shift from paper-based to digital storage. The very next year, the problem of Big Data emerged. As the digitalization of documents far surpassed the estimates, Hadoop became the step forward towards low-cost storage. It slowly became synonymous and interchangeable with the term big data. With the explosion of ecommerce, social chatter and connected things, data has expanded into new realms. It’s not just about the volume anymore.

In part 1 of this blog, I set the premise that the market is already moving from PPTware to dashboards and robust machine learning platforms to make the most of the “new oil”.

    Today, we are constantly inundated with terms like Data Lake and Data Reservoirs. What do these really mean? Why should we care about these buzz words? How does it improve our daily lives?

I have spoken with a number of people over the years and have come to realize that, for the most part, they are enamoured with the term without realizing the value or the complexity behind it. Even when they do, the variety of software components and the velocity with which they change are simply incomprehensible.

The big question here is: how do we quantify Big Data? One way to pivot is to recognize that it is no longer the volume of data you collect that matters, but the insight derived through analysis. Data, when used for purposes beyond its original intent, can generate latent value. Making the most of this latent value requires practitioners to envision the 4Vs in tandem – Volume, Variety, Velocity and Veracity.

    Translating this into reality will require a system that is:

    • Low cost
    • Capable of handling the volume load
    • Not constrained by the variety (structured, unstructured or semi-structured formats)
    • Capable of handling the velocity (streaming) and
    • Endowed with tools to perform the required data discovery, through light or dark data (veracity)

    Hadoop — now a household term — had its beginnings aimed towards web search. Rather than making it proprietary, the developers at Yahoo made a life-altering decision to release this as open-source; deriving their requisite inspiration from another open source project called Nutch, which had a component with the same name.

Over the last decade, Hadoop, with the Apache Software Foundation as its surrogate mother and active collaboration among thousands of open-source contributors, has evolved into the beast that it is today.

Hadoop is endowed with the following core components:

• HDFS (Hadoop Distributed File System) – provides a single logical storage layer spread over a number of physical machines and replicates data blocks to ensure high availability.
• MapReduce – the distributed computing model built on Mappers and Reducers. Mappers work on slices of the data and transform each record into intermediate key-value tuples, while Reducers take the tuples produced by different Mappers and combine them into the final result (see the sketch after this list).
• YARN / Mesos – the resource managers that control the availability of hardware and software processes, along with scheduling and job management; in YARN's case through two distinct components, the ResourceManager and the NodeManager.
• Commons – the common set of libraries and utilities that support the other Hadoop components.
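To make the Mapper and Reducer split concrete, here is a minimal word-count sketch in Python, written in the spirit of Hadoop Streaming. Treat it as an illustrative stand-in that runs locally on stdin, not a production job; the local sorted() call merely mimics the shuffle-and-sort step that Hadoop performs across the cluster.

import sys
from itertools import groupby

def mapper(lines):
    """Map step: break raw text into (word, 1) tuples."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce step: combine the tuples emitted for each word into a single count."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Local stand-in for a distributed run: read text from stdin, map, then reduce.
    # On a real cluster, Hadoop shuffles and sorts the mapper output across nodes
    # before it reaches the reducers.
    for word, count in reducer(mapper(sys.stdin)):
        print(word, count, sep="\t")

Frameworks such as Pig, Hive and Spark, discussed next, wrap this same map-and-reduce pattern in higher-level languages and APIs, so you rarely hand-write Mappers and Reducers anymore.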

While the above forms the foundation, what really drives data processing and analysis are frameworks such as Pig, Hive and Spark, along with other widely used utilities for cluster, metadata and security management. Now that you know what the beast is made of at its core, we will cover the dressings in the next parts of this series. Au revoir!

    The post Data Lakes: Hadoop – The makings of the Beast appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/data-lakes-hadoop-the-makings-of-the-beast/feed/ 1
    From the norm to unconventional analytics: Beyond owning, to seeking data http://www.zhazhai36.com/blog/norm-unconventional-analytics-beyond-owning-seeking-data/ http://www.zhazhai36.com/blog/norm-unconventional-analytics-beyond-owning-seeking-data/#respond Thu, 18 May 2017 14:02:54 +0000 http://www.zhazhai36.com/?p=720 The scale of big data, data deluge, 4Vs of data, and all that’s in between… We’ve all heard so many words adjectivized to “Data”. And the many reports and literature has taken the vocabulary and interpretation of data to a whole new level. As a result, the marketplace is split into ...

    The post From the norm to unconventional analytics: Beyond owning, to seeking data appeared first on Tredence.

    ]]>
Shashank Dubey
    Co-founder and Head of Analytics, Tredence

The scale of big data, the data deluge, the 4Vs of data, and all that's in between… We've all heard so many adjectives attached to “Data”. And the many reports and the literature have taken the vocabulary and interpretation of data to a whole new level. As a result, the marketplace is split into exaggerators, implementers, and disruptors. Which one are you?

Picture this! A telecom giant decides to invest in opening 200 physical stores in 2017. How do they go about solving this problem? How do they decide the optimal locations? Which neighbourhoods will garner maximum footfall and conversion?

And then there is a leading CPG player trying to figure out where they should deploy their ice cream trikes. Mind you, we are talking about impulse purchases of perishable goods. How do they decide how many trikes to deploy and where, and which flavours will work best in each region?

In both examples, if the enterprises were to make decisions based only on the data available to them (read: owned data), they would make the same mistakes day in and day out – using past data to make present decisions and future investments. The effect stares you in the face: your view of true market potential remains skewed, your understanding of customer sentiment is obsolete, and your ROI will seldom go beyond baseline estimates. You also become vulnerable to competition. Calculated risks become too calculated to change the game.

Disruption in current times requires enterprises to undergo a paradigm shift: from owning data to seeking it. This transition requires a conscious set-up:

    Power of unconstrained thinking

    As adults, we are usually too constrained by what we know. We have our jitters when it comes to stepping out of our comfort zones – preventing us from venturing into the wild. The real learning though – in life, analytics or any other field for that matter – happens in the wild. To capitalize on this avenue, individuals and enterprises need to cultivate an almost child-like, inhibition-free culture of ‘unconstrained thinking’.

Each time you are confronted with an unconventional business problem, pause and ask yourself: if I had unconstrained access to all the data in the world, how would my solution design change? What data (imagined or real) would I require to execute the new design?

    Power of approximate reality

    There is a lot we don’t know and will never know with 100% accuracy. However, this has never stopped the doers from disrupting the world. Unconstrained thinking needs to meet approximate reality to bear tangible outcomes.

The question to ask here would be: what are the nearest available approximations of all the data streams I dreamt of in my unconstrained ideation?

You will be amazed at the outcomes. For example, using Yelp to gauge the hyperlocal affluence of a catchment population (resident as well as moving), or estimating the footfall in your competitors' stores by analysing imagery captured from several thousand feet in the air.

    This is the power of combining unconstrained thinking and approximate reality. The possibilities are limitless.

    Filter to differentiate signal from noise – Data Triangulation

Remember, you are no longer only as smart as the data you own, but as smart as the data you earn and seek. At a time when data is abundant and streaming, the bigger decision to make while seeking data is identifying the “data of relevance”. The ability to filter signal from noise is critical here. In the absence of on-ground validation, triangulation is the way to go.

The data ‘purists’ among us will debate this approach of triangulation. But welcome to the world of data you don't own. Here, some conventions will need to be broken and mindsets will need to shift. We at Tredence have found data triangulation to be one of the most reliable ways to validate the veracity of unfamiliar and unvouched-for data sources.
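To give a flavour of what triangulation can look like in practice, here is a minimal sketch in Python. The data sources, the numbers and the 70% agreement threshold are purely hypothetical; the point is simply that independent, unfamiliar estimates of the same quantity are accepted only when they corroborate each other, and flagged for review when they do not.

import pandas as pd

# Hypothetical example: three independent estimates of weekly store footfall,
# say from satellite imagery, Yelp activity and a mobile-location panel.
estimates = pd.DataFrame({
    "store_id": ["S001", "S002", "S003"],
    "satellite_est": [1200, 800, 450],
    "yelp_est": [1150, 950, 300],
    "mobility_est": [1250, 820, 460],
})
sources = ["satellite_est", "yelp_est", "mobility_est"]

# Triangulate: accept an estimate only when the independent sources agree
# within a chosen tolerance; otherwise flag the store for further validation.
estimates["consensus"] = estimates[sources].median(axis=1)
spread = estimates[sources].max(axis=1) - estimates[sources].min(axis=1)
estimates["agreement"] = 1 - spread / estimates["consensus"]
estimates["flag_for_review"] = estimates["agreement"] < 0.7  # assumed threshold

print(estimates[["store_id", "consensus", "agreement", "flag_for_review"]])

In real engagements the tolerance, the consensus statistic and the follow-up validation would all be tuned to the business problem at hand.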

    Ability to tame the wild data

Unfortunately, old wine in a new bottle will not taste too good. When you explore data in the wild, beyond the enterprise firewall, conventional wisdom and experience will not suffice. Your data science teams need to be endowed with unique capabilities and the technological know-how to harness the power of data from unconventional sources. In the two examples mentioned above, of the telecom giant and the CPG player, our data science team capitalized on freely available hyperlocal data residing in Google Maps, Yelp, and satellite imagery to conjure up a great solution for location optimization.

Having worked with multiple clients across industries, we have come to realize the power of this approach of combining owned and sought data, with no compromise on data integrity, security, and governance. After all, game changers and disruptors are seldom followers; they pave their own path and choose to find the needle in the haystack as well!

    Does your organization disrupt through the approach we just mentioned? Share your experience with us.

    The post From the norm to unconventional analytics: Beyond owning, to seeking data appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/norm-unconventional-analytics-beyond-owning-seeking-data/feed/ 0
    Making the Most of Change (Management) http://www.zhazhai36.com/blog/change-management/ http://www.zhazhai36.com/blog/change-management/#respond Thu, 18 May 2017 12:51:24 +0000 http://www.zhazhai36.com/?p=713 “Times have changed.” We’ve heard this statement ever so often. Generations have used it to exclaim “things are so complicated (or simple) these days,” or expressing disdain – “oh, so they think they are a cool” generation. Whichever way you exclaim, change has been truly the “constant”....

    The post Making the Most of Change (Management) appeared first on Tredence.

    ]]>

    “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”

    – Alvin Toffler

“Times have changed.” We've heard this statement ever so often. Generations have used it to exclaim that “things are so complicated (or simple) these days,” or to express disdain: “oh, so they think they are a cool generation.” Whichever way you say it, change has truly been the “constant”.

This change is bolstered by a tech-enabled world in which the speed at which machines learn keeps accelerating, seemingly towards the speed of light.

    Let me set this in context with an example from the book of Sales. Unlike in the past, today sales reps are not gauged by the amount of sweat trickling down their foreheads. While they continue to be evaluated in terms of business development and lead conversions, it is not all manual and laborious. Technology advancements have made the process of identifying, prioritizing, scheduling, conversing and converting agile and real-time.

But merely acknowledging change, gathering data and appreciating technology will not suffice. The three need to be blended seamlessly to yield transformation. Applied to a deeper organizational context, “Change” needs to be interpreted: its pace needs to be matched or, even better, its effect needs to be contextualized for differentiation.

Change management, in this sense, is the systematization of the entire process, right from the acceptance of change to its adoption, and on to taking advantage of it to thrive in volatile times.

But what would it take for complex enterprises that swear by legacy systems to turbocharge into Change Management mode?

    To answer this, I will humanize enterprise change management with the Prosci-developed ADKAR Model.

Awareness (getting into the race) – Where can I set up the next retail store, what is the optimal planogram, how do I determine the right marketing mix, what is my competition doing differently, how do I improve customer experience, how do I ensure sales force effectiveness – the questions are ample. By the time you realize this and start strategizing, a competitor has dislodged your market position and eaten a large portion of your pie. And while these business problems seem conventional, volatility in the marketplace says otherwise. Compound this with heavy dependence on dashboards, applications, and the like for insights, and you have seen the side effects: established enterprises biting the dust.

To survive, organizations will need to be knowledgeable about the data that matters vis-à-vis the noise. They will need to interpret the data deluge with relevance and context; after all, not all data is diamond.

Desire (creating a business case for adoption) – Desire is a basic human instinct. Our insatiable urge to want something more, something better, accentuates this instinct. When it comes to enterprises, this desire is no different: to stay ahead of the curve, to make more profits, to be leaders. But there is no lock-and-key fix to achieve this mark. Realizing corporate “desire” will require a cultural and mindset shift across the organization, top-down. One of the most opportune times could be when there are changes in the leadership, followed by re-organization in the rungs below.

Gamification could be a great starting point to drive adoption in such cases. Allow the scope for experimentation to creep in; invest consciously in simmer projects; give analysts a free hand to look for the missing piece of the puzzle outside their firewall; incentivize them accordingly. Challenge business leaders to up their appreciation for the insights generated, encourage them to get their hands dirty when it comes to knowing their sources, to ask the right questions and to challenge the status quo, rather than relying on familiarity and past experience.

Knowledge and Ability (from adoption to implementation) – In a business context, “desire” typically translates into business goals: revenue, process adoption, automation, expansion into newer markets, the launch of a new product or solution, and so on. Mere awareness of the changes taking place does not translate into achievement; the change needs to be studied, and change management needs to be initiated.

    But how can you execute your day job and learn to change?

The trick here will be to make analytics seamless, almost second nature. Just as your bank alerts you to any suspicious transaction on your account, any deviation from the set course of business action needs to trigger an alert.
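Here is a minimal sketch of what such a deviation alert could look like, assuming a simple rolling z-score on a daily KPI. This is an illustrative toy in Python, not a description of any specific Tredence product.

import pandas as pd

def kpi_alerts(series: pd.Series, window: int = 30, z_threshold: float = 3.0) -> pd.Series:
    """Flag days where the KPI deviates sharply from its recent course."""
    rolling_mean = series.rolling(window, min_periods=window).mean()
    rolling_std = series.rolling(window, min_periods=window).std()
    z_scores = (series - rolling_mean) / rolling_std
    return z_scores.abs() > z_threshold

# Hypothetical daily sales with a sudden drop on the final day
daily_sales = pd.Series(
    [100, 102, 98, 101, 99] * 12 + [60],
    index=pd.date_range("2017-01-01", periods=61),
)
alerts = kpi_alerts(daily_sales)
print(alerts[alerts])  # the dates that warrant an alert, much like a bank's SMS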

Such technology-assisted decisions are the need of today and the future. Tredence's CHA solution is an example in this direction. It is intuitive, convenient and evolving, mirroring aspects of Robotic Process Automation (RPA).

Reinforcement (stickiness will be key) – Your business problems are yours to know and yours to solve. As my colleague mentioned in his blog, a one-size-fits-all solution does not exist. Solving today's business challenges requires going to their root cause, understanding the data sources available to you, and being knowledgeable about the other data combinations (within or beyond the firewall) that matter. Match this stream of data with the relevant tools and techniques that can give you the “desired” results.

A point to keep in mind during this drill is to marry the old and the new. Replacing a legacy system with something totally new could leave a bad taste in your mouth, with lower adoption and greater resistance. Embedded analytics will be key: analytics that allows you to seamlessly time-travel between the past, present and future.

To conclude, whether it is about the time to implement change, improving customer service, reducing inefficiencies, or mitigating the negative effects of volatile markets, Change Management will be pivotal. It is a structured, ongoing process to ensure you are not merely surviving change but thriving in it.

    The post Making the Most of Change (Management) appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/change-management/feed/ 0
    Key to bridging the analytics-software chasm: iterative approach + customized solutions, leading to self-service BI http://www.zhazhai36.com/blog/key-bridging-analytics-software-chasm-iterative-approach-customized-solutions-leading-self-service-bi/ http://www.zhazhai36.com/blog/key-bridging-analytics-software-chasm-iterative-approach-customized-solutions-leading-self-service-bi/#comments Fri, 05 May 2017 15:02:48 +0000 http://www.zhazhai36.com/?p=682 The world of software development and IT services have operated through well-defined requirements, scope and outcomes. 25 years of experience in software development have enabled IT services company to significantly learn and achieve higher maturity. There are enough patterns and standards...

    The post Key to bridging the analytics-software chasm: iterative approach + customized solutions, leading to self-service BI appeared first on Tredence.

    ]]>

The world of software development and IT services has operated through well-defined requirements, scope and outcomes. Twenty-five years of experience in software development have enabled IT services companies to learn significantly and achieve higher maturity. There are enough patterns and standards that one can leverage in order to avoid scope creep and make on-time delivery and quality a reality. This world has a fair order.

It is quite contrary to the Analytics world we operate in. Analytics as an industry is itself a relatively new kid on the block. Analytical outcomes are usually insights generated from historical data, namely descriptive and inquisitive analysis. With the advent of machine learning, the focus is gradually shifting towards predictive and prescriptive analysis. What usually takes months or weeks in software development often takes just days in the Analytics world. At best, this chaotic world posits the need for continuous experimentation.

The questions enterprises need to ask are: “How do we leverage the best of both worlds to achieve the desired outcomes?” and “How do we bridge this analytics-software chasm?”

The answers require a fundamental shift in perception and approach towards problem solving and solution building. The time to move from what is generally PPTware (in the world of analytics) to dashboards, and further to a robust machine learning platform for predictive and prescriptive analyses, needs to be as short as possible. The market is already moving towards this purpose in the following ways:

1. Data Lakes – On-premise platforms, built mostly through an amalgamation of open-source technologies and existing COTS software; a homegrown approach that provides a single unified platform for rapid experimentation on data, along with the capability to move quickly towards scaled solutions
2. Data Cafes / Hubs – A cloud-based, SaaS approach that allows everything from data consolidation and analysis to visualization
3. Custom niche solutions that serve a specific purpose

Over a series of blogs, we will explore the above approaches in detail. These blogs will give you an understanding of how integrated and interoperable systems allow you to take your experiments towards scaled solutions rapidly, in a matter of days and in a collaborative manner.

    The beauty and the beast are finally coming together!

    The post Key to bridging the analytics-software chasm: iterative approach + customized solutions, leading to self-service BI appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/key-bridging-analytics-software-chasm-iterative-approach-customized-solutions-leading-self-service-bi/feed/ 6
    SOLUTIONS, WHAT’S NEW? http://www.zhazhai36.com/blog/solutions-whats-new/ http://www.zhazhai36.com/blog/solutions-whats-new/#comments Thu, 06 Apr 2017 08:46:02 +0000 http://www.zhazhai36.com/?p=444 The cliché in the recent past has been about how industries are racing to unlock the value of big data and create big insights. And with this herd mentality comes all the jargons in an effort to differentiate. Ultimately, it is about solving problems. In the marketplace abstraction of problem...

    The post SOLUTIONS, WHAT’S NEW? appeared first on Tredence.

    ]]>
Sagar Balan
    Principal, Customer Success

Dell, HP and IBM have all tried to transform themselves from box sellers into solution providers. Then, in the world of Uber, many traditional products are fast mutating into services. At Walmart, it is no longer just about grocery shopping: their pick-and-go service tries to understand more about your journey as a customer, and grocery shopping is just one piece of the puzzle.

There's a common thread that runs across all three examples, and it's about how to break through the complexity of your end customer's life. Statistics, machine learning and artificial intelligence, on their own, can't make the lives of store managers at over 2,000 Kroger stores across the country any simpler. It all sounds way too complex.

    Before I get to the main point, let me belabor a bit and humor you on other paradigms floating around. Meta software, Software as a Service, cloud computing, Service as a Software… Err! Did I just go to randomgenerator dot com and get those names out? I swear I did not.

The cliché in the recent past has been about how industries are racing to unlock the value of big data and create big insights. And with this herd mentality comes all the jargon coined in an effort to differentiate. Ultimately, it is about solving problems.

    In the marketplace abstraction of problem solving, there’s a supply side and a demand side.

The demand side is an overflowing pot of problems. Driven by accelerating change, problems evolve really fast and newer ones keep popping up. Across Fortune 500 firms, there are very busy individuals and teams running businesses the world over, grappling with these problems: store managers in retail, trade promotion managers in a CPG firm, district sales managers in a pharma firm, decision engineers in a CPG firm, and so on. For these individuals, time is a very precious commodity. Analytics is valuable to them only when it is actionable.

On the supply side, there is complex math (read: algorithms), advanced technology and smart people to interpret the complexities. For the geek in you, this is a candy-store situation. But how do we make all this complex math (machine learning, AI and everything else) actionable?

    To help teams/individuals embrace the complexity and thrive in it, nature has evolved the concept of solutions. Solutions aim to translate the supply side intelligence into simple visual concepts. This approach takes intelligence to the edge, thereby scaling decision making.

    So, how do solutions differ from products, from meta-software, service as a software and the gibberish?

Fundamentally, a solution is meant to exist as a standalone atomic unit, with the singular purpose of making the lives of decision makers easy and simple. It is not created to scale the creation of analytics.
For example, a solution created to detect anomalies in pharmacy billing will be designed to do just that. Its design will not be bent by the efficiency-driven temptation to apply it to a fraud detection problem as well. Because the design of a solution is driven by the needs of the individual dealing with the problem, it should not be driven by the motivation to scale the creation of analytics. Rather, it should be driven by the motivation to scale the consumption of analytics, pushing all the power of machine learning and AI to the edge.
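To illustrate how narrow such an atomic unit can be, here is a hypothetical sketch in Python of a pharmacy-billing anomaly detector, using an off-the-shelf isolation forest on simulated bill amounts. It is not Tredence's actual solution; it only shows that a single-purpose scoring step is all the core of such a solution needs to be.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical, single-purpose solution core: flag unusual pharmacy bills
# so a reviewer can take a closer look. The amounts below are simulated.
rng = np.random.default_rng(42)
normal_bills = rng.normal(loc=45.0, scale=8.0, size=(500, 1))   # typical bill amounts
odd_bills = np.array([[310.0], [5.0], [999.0]])                 # a few oddities
bills = np.vstack([normal_bills, odd_bills])

detector = IsolationForest(contamination=0.01, random_state=42).fit(bills)
flags = detector.predict(bills)   # -1 marks an anomaly, 1 marks normal

print("Flagged bill amounts:", np.round(bills[flags == -1].ravel(), 2))

Everything else in a real solution (the data plumbing, the visual layer the reviewer actually sees, the alert workflow) exists only to make that one decision easy to consume at the edge.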

In Tredence, you have a partner who can execute the entire analytical value chain and deliver a solution at the end. No more running to the IT department with a deck or SAS/R/Python code, asking them to create a technology solution. Read more about our offerings here.

    This blog is the first of the two-part series. The second part will be about spelling the S.O.L.U.T.I.O.N.

    The post SOLUTIONS, WHAT’S NEW? appeared first on Tredence.

    ]]>
    http://www.zhazhai36.com/blog/solutions-whats-new/feed/ 1