How a Technical Founder Systematically Tackles Inefficient Processes — from Logistics to Fintech

When a tech company’s internal processes are inefficient, the symptoms aren’t always immediately apparent. On the surface, things seem to function: orders get processed, routes are calculated, support tickets are resolved. But behind the scenes, inefficiencies can quietly build up — excessive manual work, redundant steps, and inconsistent performance. As the company scales, these concealed bottlenecks start to hinder growth.
Solving the problem requires more than piecemeal fixes. It calls for rethinking the architecture, redesigning workflows, and ensuring that systems interact seamlessly. This demands rigorous analysis, close attention to operational detail, and the ability to implement changes without disrupting stable processes.
Manu Ajith is the co-founder and CTO of ReturnQueen, a US-based startup that handles product returns for online purchases from platforms like Amazon and Best Buy. In this interview, he explains how the team identifies points of friction in its operations, automates high-effort processes, and implements change in a way that supports growth — without breaking what’s already stable.
When a System Works — but Holds You Back
ReturnQueen’s service is straightforward: users schedule a pickup through the app, and a courier collects the item from their doorstep. The company launched quickly during the pandemic, when e-commerce volumes were surging globally.
ReturnQueen was founded in 2020 by Manu Ajith, together with partners Daphna Englard and Dasya Katz. Today, the startup operates in over 10,000 zip codes across more than 20 U.S. cities — with plans to expand into additional key markets. By optimizing the post-purchase experience, ReturnQueen boosts customer satisfaction and drives long-term retention — reshaping what consumers expect from online shopping.
At first glance, the system seemed to work well: returns were processed, couriers arrived on time, and customer support handled inquiries efficiently. However, behind this apparent stability were inefficiencies that made scaling difficult. Many tasks — especially those involving return labels — were handled manually. Users had to locate and upload labels themselves, while support staff frequently assisted with finding the correct portal or troubleshooting upload issues.
“Our early analysis revealed that about 20% of our support team’s time went toward label-related problems — from guiding users through retailer portals to resolving upload issues. These were hidden costs that didn’t seem significant initially,” Manu recalls.
Logistics posed another challenge. The dispatch system didn’t account for geographic proximity or existing routes, leading to redundant trips. Drivers were sent to the same neighborhoods several times a day, significantly increasing mileage per pickup. The underlying issues only became clear after the team connected Metabase to their operational data and analyzed user behavior through PostHog.
How ReturnQueen Decides When to Make a Change
At ReturnQueen, changes aren’t made on a hunch — they’re driven by three key inputs: quantitative data, user behavior insights, and feedback from the operations team. Only when all three indicate the same problem does the team move forward with analysis and potential updates.
One example is how they addressed return shipping labels. Initially, users had to manually upload labels by navigating to the retailer’s website, downloading the file, and then uploading it to the app. Later, ReturnQueen introduced a feature that automatically extracted the correct label from the order confirmation email. Data from Metabase showed that this automation reduced failures — like corrupted or mismatched files — by about 15%. Customer support also noted a decrease in related tickets, while PostHog data confirmed that users relying on manual uploads were more likely to repeat the step or abandon it altogether.
“We don’t just rely on metrics. Sometimes the numbers change, but the team doesn’t notice a real difference. Other times, we catch users struggling at a specific step through session data, even if it doesn’t show up on dashboards. That’s why we take a holistic approach — combining data, on-the-ground workload, and team feedback to determine when and where to step in,” Manu explains.
Why ReturnQueen Had to Automate Data Collection
At ReturnQueen, automation wasn’t introduced overnight — it evolved as the company faced growing operational demands. Initially, while the service operated in just one city, many tasks were handled manually: verifying orders, tracking return deadlines, and coordinating pickups. This manual approach worked in the beginning, but as the team planned to expand into a second market, it became clear that doubling the volume would require far more than just doubling the resources. To scale efficiently without significantly increasing headcount, the team had to automate the most labor-intensive parts of the process.
“We estimated the manpower needed to maintain our current speed, and the math didn’t add up — scaling manually just wasn’t sustainable,” Manu recalls.
The first step was automating data collection. The team developed an ETL process that, with user consent, integrated with Gmail. This system extracted order confirmation emails, identified which items were returnable, tracked return deadlines, and automatically triggered the correct label or routing flow — eliminating the need for manual data entry.
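The extraction step described above can be sketched in miniature. The code below is a simplified illustration, not ReturnQueen's actual pipeline: the `RETURN_WINDOWS` table, field names, and regex-based parsing are all assumptions for demonstration — a production ETL system would use retailer-specific templates and structured parsers rather than generic regexes.

```python
import re
from datetime import datetime, timedelta
from email import message_from_string
from typing import Optional

# Hypothetical return windows per retailer, in days from the order date.
RETURN_WINDOWS = {"amazon.com": 30, "bestbuy.com": 15}

def extract_return_info(raw_email: str) -> Optional[dict]:
    """Parse an order-confirmation email and estimate the return deadline."""
    msg = message_from_string(raw_email)

    # Identify the retailer from the sender's domain.
    sender = msg.get("From", "")
    domain_match = re.search(r"@([\w.-]+)>?$", sender)
    if not domain_match:
        return None
    domain = domain_match.group(1).lower()

    # Look up the return window; unsupported retailers are skipped.
    window = next((days for suffix, days in RETURN_WINDOWS.items()
                   if domain.endswith(suffix)), None)
    if window is None:
        return None

    # Pull the order ID out of the body and compute the deadline.
    body = msg.get_payload()
    order_match = re.search(r"Order\s*#?\s*([\w-]+)", body)
    sent = datetime.strptime(msg["Date"], "%a, %d %b %Y %H:%M:%S %z")
    return {
        "retailer": domain,
        "order_id": order_match.group(1) if order_match else None,
        "return_deadline": (sent + timedelta(days=window)).date(),
    }
```

In a real deployment this function would feed the routing and labeling flow directly, so that a confirmed order arrives in the system with its deadline already attached.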
This automation wasn’t just about user convenience — it was a strategic shift. Without it, handling thousands of returns per day would have required a significantly larger team.
“This integration wasn’t merely about improving user experience. It was essential for building a streamlined, automated pipeline that could scale efficiently,” Manu explains.
How the Team Tests Changes Without Disrupting the System
While automation enabled ReturnQueen to scale, it also heightened the risks. Any mistake in the logistics engine could result in missed pickups and erode customer trust, particularly when handling high volumes. The challenge was clear: how to implement changes without jeopardizing the existing system.
To mitigate risks, major updates are first tested in a controlled environment. For instance, when the team introduced new routing logic based on a VRP (Vehicle Routing Problem) algorithm, they rolled it out exclusively in Austin. Over the next two weeks, they monitored key metrics such as on-time arrival rate, route duration, and average mileage per pickup. They also gathered driver feedback through internal channels to quickly identify any potential issues.
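To give a flavor of what a routing heuristic does, here is a toy nearest-neighbor sketch. This is not ReturnQueen's algorithm: real VRP solvers (such as Google OR-Tools) handle vehicle capacity, time windows, and actual road distances, while this example only chains straight-line hops greedily.

```python
import math

def nearest_neighbor_route(depot, pickups):
    """Greedy nearest-neighbor tour: always drive to the closest
    remaining pickup. A toy stand-in for a real VRP solver."""
    remaining = list(pickups)
    route = [depot]
    current = depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

def route_miles(route):
    """Total straight-line length of the route, in coordinate units."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))
```

Even this naive heuristic illustrates the metric being optimized: fewer redundant trips into the same neighborhood means fewer miles per pickup.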
“If something went wrong, we could immediately revert to the previous version. Feature flags make it possible to switch functionalities on or off within minutes,” Manu explains.
Feature flags allow the team to activate specific features without redeploying code. This approach supports controlled experiments, A/B testing, and rapid rollbacks if results deviate from expectations. By following this phased rollout method — combining gradual implementation, continuous monitoring, and the ability to reverse changes — ReturnQueen maintains system stability while introducing improvements.
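The feature-flag pattern itself is simple. The sketch below shows the core idea with an in-memory store; it is illustrative only — teams typically back flags with a config service or database so toggles take effect across all instances without a redeploy. The flag name `vrp_routing` is hypothetical.

```python
class FeatureFlags:
    """Minimal in-memory feature-flag store (illustrative only)."""

    def __init__(self):
        self._flags = {}

    def enable(self, name: str) -> None:
        self._flags[name] = True

    def disable(self, name: str) -> None:
        self._flags[name] = False

    def is_enabled(self, name: str) -> bool:
        # Unknown flags default to off, so new code paths stay dark
        # until explicitly switched on.
        return self._flags.get(name, False)

def plan_routes(pickups, flags: FeatureFlags) -> str:
    # New routing logic sits behind a flag; disabling the flag
    # reverts to the legacy path in seconds, with no redeploy.
    if flags.is_enabled("vrp_routing"):
        return f"vrp({len(pickups)} pickups)"
    return f"legacy({len(pickups)} pickups)"
```

The rollback path is just `flags.disable("vrp_routing")` — which is why a bad release can be reverted in minutes rather than waiting for a new deployment.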
As manual processes were phased out, the next focus was ensuring that the system remained resilient at scale — not just from a functional perspective, but also in terms of robust architecture.
When It’s Easier to Rebuild the Architecture Than Patch It
As ReturnQueen’s order volume increased, the team faced a critical challenge: the system couldn’t keep up. The platform was originally built as a monolith, meaning all core processes ran inside a single deployable codebase. This setup made it difficult to scale, as any change or failure in one area — like routing logic — could disrupt other functions, including order tracking and data ingestion. Even minor updates required deep, system-wide testing.
To overcome this, the team transitioned to a microservices architecture, breaking the platform into distinct, modular components: order ingestion, routing, and status tracking. Each module was designed with a clear responsibility and could be scaled independently. This shift significantly reduced pressure on individual parts of the system and increased flexibility. Now, updates could be made to one module without risking the stability of the entire platform — a crucial factor for maintaining fast iteration and safe growth.
How the Team Measures Whether a Solution Actually Works
At ReturnQueen, every change is driven by clear, measurable outcomes. Before implementing a new feature or update, the team defines success criteria: Are operations becoming faster? Has the support load decreased? Are users navigating the process more smoothly?
For example, when updating routing logic, the team tracks metrics like average miles per pickup, the success rate of first-attempt pickups, and the time between a return request and the courier’s arrival. If the user interface is updated, they monitor how many users complete the step on the first try, the bounce rate, and how often customers reach out to support.
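Computing these metrics is straightforward once pickup records are collected. The sketch below assumes a hypothetical record shape with `miles`, `first_attempt_success`, and `wait_hours` fields; the actual schema and dashboards (Metabase, in ReturnQueen's case) would differ.

```python
def pickup_metrics(pickups):
    """Aggregate routing health metrics from a list of pickup records.

    Each record is assumed to carry:
      - 'miles': miles driven for this pickup
      - 'first_attempt_success': True if collected on the first visit
      - 'wait_hours': hours from return request to courier arrival
    """
    n = len(pickups)
    return {
        "avg_miles_per_pickup": sum(p["miles"] for p in pickups) / n,
        "first_attempt_rate": sum(p["first_attempt_success"] for p in pickups) / n,
        "avg_wait_hours": sum(p["wait_hours"] for p in pickups) / n,
    }
```

Defining these numbers before a rollout — rather than after — is what lets the team say whether the new routing logic actually moved them.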
“We start with a simple question: what exactly should improve if the solution works as planned? That way, we stay focused and can quickly spot when a hypothesis doesn’t hold up,” Manu explains.
How the Team Responds to Unintended Side Effects
Despite thorough planning, unexpected outcomes are sometimes unavoidable — especially when machine learning is involved. At ReturnQueen, this challenge emerged with the return label recognition model. Since these models are trained on large image datasets, they can be highly sensitive to data balance.
One incident occurred when the team released an update aimed at improving performance on blurry label photos. While the new model handled unclear images better, it unexpectedly performed worse on sharp ones. The issue was detected quickly: the dashboard showed a spike in errors in scenarios that had previously been stable.
“We immediately rolled the model back to the previous version — thanks to feature flags, it took just minutes. Then we adjusted the dataset to reduce the bias toward blurry images, tested the model offline, and only redeployed it after confirming consistent performance,” Manu explains.
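The dataset adjustment Manu describes is a classic class-rebalancing step. Below is a simplified sketch of one common approach — downsampling the over-represented class so blurry and sharp examples carry equal weight. The `is_blurry` label field is a hypothetical name; real pipelines might instead reweight the loss or augment the minority class.

```python
import random

def rebalance(samples, key="is_blurry", seed=0):
    """Downsample the majority class so both classes are equally sized.

    'samples' is a list of dicts; 'key' is a boolean label field.
    A fixed seed keeps the subsample reproducible across runs.
    """
    rng = random.Random(seed)
    positives = [s for s in samples if s[key]]
    negatives = [s for s in samples if not s[key]]
    n = min(len(positives), len(negatives))
    return rng.sample(positives, n) + rng.sample(negatives, n)
```

After rebalancing, the model is evaluated offline on held-out sets of both sharp and blurry images before the flag is flipped back on — the "confirm, then redeploy" step in the quote above.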
This swift detect-diagnose-recover approach allows the team to deploy even high-risk changes with minimal disruption, maintaining system stability while continuously improving performance.
Why Not Everything Should Be Automated
Despite ReturnQueen’s emphasis on automation, some processes are intentionally kept human-driven — especially when trust takes precedence over technical accuracy. A prime example is handling high-value disputed returns.
If a customer claims an item arrived damaged but the courier insists it was intact at pickup, the situation rarely has a straightforward answer. Even with photos, logs, and timestamps, an automated system might misinterpret context, leading to a decision that could damage customer trust.
“Of course, we could develop a model to analyze images and text, but the cost of getting it wrong is too high. We deliberately keep these edge cases under human oversight — people can grasp nuance and communicate with empathy, rather than just processing data,” Manu explains.
By maintaining this balance, ReturnQueen ensures that while routine, scalable tasks are automated, human judgment remains central where stakes are too high to rely solely on technology.
How ReturnQueen Evaluates Technology Decisions
For Manu Ajith, effective product development means directly connecting technical solutions with tangible, real-world impact — both for the user and the business. Whether it’s a new feature, algorithm adjustment, or infrastructure change, the focus isn’t on complexity but on measurable outcomes.
“The key question is: what problem are we solving, and how can we prove it? It’s not enough to believe it works — we need data, user sessions, and metrics that clearly show improvements in simplicity, speed, or clarity,” he says.
At ReturnQueen, the benchmark for success is utility, not novelty. Every technical change is required to demonstrate value through evidence rather than assumptions, keeping the team focused on practical, user-centric improvements.
Copyright 2025 The CEOWORLD magazine. All rights reserved. This material (and any extract from it) must not be copied, redistributed or placed on any website, without CEOWORLD magazine's prior written consent. For media queries, please contact: info@ceoworld.biz