Sri Muthu, VP and Technology and Operations Executive for clearXchange/Wells Fargo, and Bala Subramanian, Chief Development Officer for S&P Capital IQ, discussed the challenges in managing real-time data and the best tools to analyze that information.
Bala Subramanian: Can you start by giving us a little bit of detail on your background and areas of expertise?
Sri Muthu: I’ve been in technology at Wells Fargo since 1999, working my way up from engineer to architect to manager, initially in online brokerage and trust systems. I ran Wells Fargo’s online brokerage on the technology side and then some engineering groups within Wells Fargo for the online, mobile, configuration management and pre-production engineering areas. Then I founded the incubator, R&D and Wells Fargo Labs in the Internet Group, which went on to launch a product called clearXchange, for which I was the founding technology and operations leader and architect. I’m currently the head of technology, and I report to the CEO. That’s my background from a technology perspective and a company perspective.
So what are some of the best practices and lessons learned from managing real-time data in a large enterprise as successful as yours?
To me, it comes down to design. You have to start with the design and the overall system view of where all the different end points are going to be. Really taking the time to look at a design from an end-to-end perspective is pretty significant, as well as looking at it in the context of the bottlenecks and constraints around real-time data and asynchronous data. Then we have to consider the consumer usage periods and batch windows. You just have different schedules and needs. So it’s not just looking at the design but looking at the design over time periods to understand what the peak points are over a month, a week or a day, and also over a year and maybe even over two or three years as the overall capacity growth comes in. When we’ve seen issues with any of these best practices, it comes back to, “Did you really think through the design? Did you really think through the usage? Did you really think through the time parameter across all the user types that are out there?”
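To make that concrete, here is a minimal sketch – not clearXchange’s actual tooling, and with a made-up request log – of what “looking at the design over time periods” can mean in practice: rolling the same stream of request timestamps up at several granularities to find the peak hour, day and month.

```python
from collections import Counter
from datetime import datetime

# Hypothetical request log: one timestamp per request.
requests = [
    datetime(2015, 3, 31, 23, 55),  # month-end spike
    datetime(2015, 3, 31, 23, 58),
    datetime(2015, 4, 1, 0, 2),
    datetime(2015, 4, 15, 12, 30),  # mid-month payday spike
    datetime(2015, 4, 15, 12, 31),
    datetime(2015, 4, 15, 12, 32),
]

def peak(buckets: Counter) -> tuple:
    """Return the busiest bucket and its request count."""
    return buckets.most_common(1)[0]

# Roll the same stream up at three granularities.
by_hour = Counter(ts.strftime("%Y-%m-%d %H:00") for ts in requests)
by_day = Counter(ts.strftime("%Y-%m-%d") for ts in requests)
by_month = Counter(ts.strftime("%Y-%m") for ts in requests)

for label, buckets in [("hour", by_hour), ("day", by_day), ("month", by_month)]:
    bucket, count = peak(buckets)
    print(f"peak {label}: {bucket} ({count} requests)")
```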
But what is actually needed to scale it across the enterprise in real time, given the payment system that you’re running now?
The key to scaling is understanding where the product fits in its lifecycle within the enterprise. If it’s a product or a technology that’s newly brought into the enterprise, it often doesn’t get rolled out in a big bang. It gets put into one organization or one division that decides it wants to try something out. So they tend to design and build for single-point or single-group use, even though the ultimate goal is the enterprise. But what happens is that some of the paradigms and thinking about how the product should be implemented get constrained by what was initially a small user group or a small deployment.
And when you start really thinking through how you want to scale that, you have to step back and look at all your assumptions around the design, the deployment, the implementation and operations. Especially when you have a geographically distributed enterprise, you really have to start thinking through how this is going to perform in various locations, whether urban or rural. And frankly, even where a lot of people are working remotely, how is the data going to transfer across your VPN or your mobile connectivity? I won’t even get into global yet, but even in the domestic space, what worked really well in a small division in one state is going to be challenging to scale across the enterprise. It usually works pretty well under normal usage, but you start seeing problems when you hit peak loads.
So it’s about stepping back and looking at what are all the assumptions that went into deploying, designing, buying and implementing this, and what do we need to think through to do it again? It calls for a really disciplined approach to this, to look at every single component. The people, the processes that are put into place and the underlying technology itself need to be sort of reviewed – ideally, by a different pair of eyes than the team that initially put it in – to see if you have any bias that you need to worry about.
“To me, it comes down to design. You have to start with the design and the overall system view of where all the different end points are going to be. Really taking the time to look at a design from an end-to-end perspective is pretty significant.”
You’re dealing with various geographies, as well as types of users and moving data around. How do you create agility while maintaining reliability and resilience?
It is a balance but not as much as people think. I actually think that when you’re really agile and adaptable, you increase your reliability and resilience, because if you’re agile you can make course corrections relatively quickly and inexpensively. In the event you see something that’s potentially going to impact your resiliency or you see something that’s going to impact your uptime, you have the ability to actually correct in an agile manner. So when the team knows they can build and deploy a patch in a few hours or a few days instead of waiting three months or six months to put something in, you have a different paradigm around how to get things done.
There’s also the flip side, which is that you can take in a small amount of work and then un-deploy it; in other words, if you’re able to deploy and un-deploy in under 10 or 20 minutes, then once you’ve tested something, backing it out is only 20 minutes, and your reliability and resiliency go up. The other thing that I’ve noticed in the agility space is that a lot of the practices – whether it’s scrum or pair programming – tend to really allow for multiple eyes to look at the design, look at the underlying code, look at the different things that people are doing and work on how it actually gets done. That’s instead of spending a lot of time creating a document or an artifact that frankly tends to become obsolete as soon as it’s complete, because nothing stays constant. Things always change. So being able to adapt to new vendors, new structures, new patches, new customer demand and new sales opportunities really makes the product design different. Start thinking of it from the perspective of, “How do I make sure agility is built in?” It’s not an either/or; it’s really an adjunct that helps you get reliability. You can build a giant fortress around something, but all it takes is one failure and you’ll never get anything to work again the way you thought it did.
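One rough illustration of that deploy/un-deploy point, under the assumption of a feature-flag-style gate (the flag store, flag name and functions here are hypothetical, not clearXchange’s actual tooling): flipping a runtime flag backs a change out in seconds, without a full redeploy.

```python
# Illustrative feature-flag gate: flipping the flag "un-deploys"
# the new code path immediately, with no redeploy or restart.
# In practice the flags would live in a shared store (database or
# config service), not a module-level dict.
FLAGS = {"new_settlement_path": True}

def flag_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def settle_payment(payment_id: str) -> str:
    if flag_enabled("new_settlement_path"):
        return f"settled {payment_id} via new path"
    return f"settled {payment_id} via legacy path"

print(settle_payment("p-1001"))        # new path while the flag is on
FLAGS["new_settlement_path"] = False   # the 20-minute "back out"
print(settle_payment("p-1001"))        # traffic instantly reverts
```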
When you get at an enterprise level, how do you effectively organize, deliver and normalize data? How do you enable quality decision-making in that space?
This one’s actually probably the harder challenge, because what we’ve realized is that pretty much all stakeholders have different perspectives on what data is important to them. Whether it’s a financial decision-maker or a risk decision-maker, a consumer-facing decision-maker or a product person, it’s really about trying not to come up with the least common denominator, but instead providing different views into the data – those different lenses. So in other words, my finance teams are going to show that this is our revenue opportunity and this is what the numbers look like: what’s revenue and what’s cost? For my risk person, we slice and dice the data – it’s obviously always the same data – to show that these are some of the risks we’re going to incur, this is how we’re going to mitigate them, and this is the historical data that shows what our risk mitigations have done or not done. And it’s the same thing for the product side.
But at least in our case, it really comes down to using the same data but slicing and viewing it differently, so that folks have the lens they find most useful. Little things make a difference, so it’s about making sure that everybody’s data is in the same time zone, and making sure that everybody’s data is normalized to the correct period: for example, “What does the end of the month look like? Is it end of month on the West Coast or is it the 31st on the East Coast?” Especially for a large distributed organization, getting everyone on the same page about that data makes a big difference.
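As a small illustration of that time-zone point – a sketch using Python’s standard zoneinfo module, with a made-up timestamp – the same instant can fall in different months depending on the zone, which is why normalizing everything to one canonical zone matters:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# One instant, different calendar labels: 1:30 a.m. on the 31st in
# New York is still the evening of the 30th on the West Coast, so
# "end of month" depends on the lens unless a canonical zone is fixed.
ts = datetime(2015, 3, 31, 1, 30, tzinfo=ZoneInfo("America/New_York"))

print(ts.astimezone(timezone.utc).isoformat())
# 2015-03-31T05:30:00+00:00
print(ts.astimezone(ZoneInfo("America/Los_Angeles")).isoformat())
# 2015-03-30T22:30:00-07:00 -- still March 30 out West

# One convention that keeps everyone on the same page: store and
# aggregate in UTC, and only convert at display time.
def to_utc(local: datetime) -> datetime:
    return local.astimezone(timezone.utc)
```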
And then decision-making is a process of narrowing your choices down, so we start by diverging and intentionally providing five, six or seven views of the data. We provide three or four different opportunities or options for how things can be done – the pros and cons – and then we intentionally converge on maybe one or two common solutions. Once you get to those one or two common solutions, it helps to know that the data you’re providing is consistent across everybody; we may all look at it through a different lens, but it’s the same underlying data.
You’re really comparing apples to apples at that point – the two options you want to make a decision about – rather than hearing “my data shows something else.” We all want to be on the same page; it’s the same data, and maybe we all just have a different lens.
So that’s typically what we’ve tried to do, but it’s not an easy solution. We continue to look at new vendors, new providers, new data and new ways to aggregate data and provide it in even more of a real-time manner, rather than through periodic reporting. Maybe we show data in real time, but as a trend over time. That way, even during the decision process, if we made a report at the beginning of the month and we’re coming to a decision on the 15th, maybe it’s worthwhile to update the report on the 15th even though it’s extra work. And we’re still thinking through, “What is the balance between providing new and current data versus historical?” It’s going to be a challenge, I think.
“Things always change. So being able to adapt to new vendors, new structures, new patches, new customer demand and new sales opportunities really makes the product design different.”
So what are some of the innovative things that your firm is working on in terms of interpreting and specializing data? What insights do you have toward making it easy for people to consume that internally as well as externally?
Obviously, there are lots and lots of tools, and I won’t mention any vendors’ names, but we’ve been primarily looking at the ones that allow self-service. If nothing else, that’s probably the biggest difference that we’re trying to get to, and we’re in the process of getting there with some of the products we’re working with. We’re really looking at it as a self-service model for letting the end users manipulate and visualize the data the way they want it.
So you want tools that let you go to a website, a local client, or even your Excel application, and look at pre-made templates and pre-made reports. They should also give you the opportunity to create custom reports as you need them, or templatize the reports, all going back to the same data in real time, so you don’t have this cycle of somebody asking for some data, you going to the back shop, asking your data analyst to run some reports and send them back. We’re trying to reduce that round trip between the user’s request and the data and back, and trying to really get to self-service. I think that’s fundamentally the biggest innovation we’re going to see: letting the analysts, the product folks and the finance folks get to the data directly. Obviously, you’re going to have to scrub it and clean it up and make sure it’s secure, and ensure that you have the right controls around who’s looking at what data. It’s also about putting those access-control layers in and putting the effort into finding a solution where the end internal customer can go in and look at that data themselves without having to come back to us.
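A toy sketch of that “same data, different lens, with access controls” idea – the roles, fields and records here are hypothetical, not clearXchange’s schema – in which every role queries the same records while an access-control map scopes which fields each role may see:

```python
# Toy "lens" layer: every role reads the same underlying records,
# but an access-control map decides which fields each role sees.
RECORDS = [
    {"txn_id": "t1", "amount": 120.0, "risk_score": 0.12, "customer": "c9"},
    {"txn_id": "t2", "amount": 75.5,  "risk_score": 0.81, "customer": "c4"},
]

VISIBLE_FIELDS = {
    "finance": {"txn_id", "amount"},
    "risk":    {"txn_id", "risk_score"},
    "product": {"txn_id", "amount", "customer"},
}

def view_for(role: str) -> list[dict]:
    """Self-service view: same data, role-scoped fields."""
    allowed = VISIBLE_FIELDS[role]
    return [{k: v for k, v in rec.items() if k in allowed} for rec in RECORDS]

print(view_for("finance"))  # [{'txn_id': 't1', 'amount': 120.0}, ...]
print(view_for("risk"))     # [{'txn_id': 't1', 'risk_score': 0.12}, ...]
```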
When you look at some of the vendors in the top quadrant, there are some really good products out there with that mindset. It fundamentally shifts away from the idea that I have to go to my data analyst, who creates the reports and the visualizations, toward saying that now we’re going to give you all the tools so you can do it yourselves. We’ll just make sure the data is correct and secure; we’ll make sure that you have the right controls, and we’ll help you with some canned examples and templates. You can actually slice and dice this any way that you want. And I think that has made the biggest shift in what we’ve done from a data-visualization perspective.
Interesting. Before we conclude, is there anything else you’d like to add?
One thing I think about when it comes to the Argyle Forum is that we all learn from each other – at the general level and at the specifics level, any feedback, comments, best practices or ideas. Share as much as you can within proprietary constraints, but share, because it helps the industry as a whole and it helps all of us be successful in what we’re trying to deliver.
Sri Muthu is Head of Technology for clearXchange, a payments joint venture of Bank of America, Capital One, JPMorgan Chase and Wells Fargo. As a co-founding technologist, he has had responsibility for various areas since 2011. He had previously been with Wells Fargo since 1999, where he co-founded and managed Incubator Labs and R&D for Internet Services from 2007. He managed Online Investments Technology from 2002 to 2007 and was responsible for the Online Brokerage & Trust sites and OFX & Investment Web Services. He also previously managed the Internet Services Configuration Management and Pre-Production Groups.
Sri attended the Regional Applied Computing Center in Singapore and Virginia Tech, and recently completed the Strategic Decision and Risk Management program at Stanford. He previously held FINRA Series 7, 63 and 24 registrations and has completed the CISA, CISSP, CISS, CISM and CGEIT examinations.
Bala Subramanian is Chief Development Officer at S&P Capital IQ. Prior to that, as Chief Technology Officer at S&P Dow Jones Indices, he was responsible for technology strategy, application development, technical operations and infrastructure. Bala joined S&P from Citigroup Asset Management (CAM), where he managed the build-out of a suite of technology platforms for investment research, portfolio management and equity trading. He also successfully led an effort to create the first industry-leading investment advisory and trading platform for CAM’s Private Portfolio Group. Bala played a lead role in integrating the Legg Mason and CAM platforms when they merged in 2005.
Bala holds a bachelor’s degree in engineering with honors from Madras University and an MBA in finance from NYU Stern School of Business.