
Addressing Salesforce Technical Debt

When to rebuild vs. optimize.

Financial Services, Salesforce · Sep 18, 2024 · 9 min read

If your Salesforce instance is more than three years old, it has problems. That's not a guess - it's a pattern we see constantly. Layers of quick fixes, undocumented integrations, automation that breaks mysteriously, and tribal knowledge that walked out the door with former employees.

The question isn't whether you have technical debt. It's which parts are actually hurting you.

How Salesforce instances accumulate debt

It happens gradually, then all at once. A pattern we've seen hundreds of times:

  1. Year 1: Clean implementation. Everything makes sense. Documentation exists.
  2. Year 2: Business changes. Quick workarounds get added. "We'll clean this up later."
  3. Year 3: Original admin leaves. New person doesn't touch anything they don't understand.
  4. Year 4+: Layer upon layer. Nobody knows what depends on what. Fear of breaking things prevents any cleanup.

By the time someone calls us, users have lost trust in the data, reports don't match reality, and every new request takes three times longer than it should.

Here's a concrete example: we inherited a client org where a previous deployment had 0% test coverage. Salesforce requires 75% coverage for production deployments, but there are workarounds - deploying with "Run Specified Tests" lets you cherry-pick which tests run, and as long as the org's overall coverage stays above 75%, untested code slips through. Each deployment borrows against the coverage buffer until someone gets stuck.

That someone was us. Our components had 95-100% coverage, but the org's overall coverage had been dragged down by years of untested deployments. We couldn't deploy our work until we wrote tests for code we didn't write - cleaning up debt that had been accumulating invisibly for years.
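
What did the cleanup look like in practice? Roughly this - a minimal sketch of backfilling coverage for inherited code, where OpportunityDiscountHandler and its applyDiscounts method stand in for the kind of class we didn't write but had to cover before our own work could deploy:

```apex
// Sketch: backfilling coverage for inherited code.
// OpportunityDiscountHandler and applyDiscounts() are hypothetical stand-ins
// for the untested code that was blocking our deployment.
@isTest
private class OpportunityDiscountHandlerTest {

    @isTest
    static void appliesDiscountToBulkRecords() {
        // Build test data inside the test context - never lean on org data
        List<Opportunity> opps = new List<Opportunity>();
        for (Integer i = 0; i < 200; i++) {
            opps.add(new Opportunity(
                Name = 'Test Opp ' + i,
                StageName = 'Prospecting',
                CloseDate = Date.today().addDays(30),
                Amount = 10000
            ));
        }
        insert opps;

        Test.startTest();
        OpportunityDiscountHandler.applyDiscounts(opps);  // hypothetical method under test
        Test.stopTest();

        // Assert behavior, not just coverage - coverage without assertions is the next debt
        for (Opportunity o : [SELECT Amount FROM Opportunity WHERE Id IN :opps]) {
            System.assert(o.Amount <= 10000, 'Discount should never increase Amount');
        }
    }
}
```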

Rebuild vs. optimize: the real decision

The knee-jerk reaction is "burn it down and start over." Usually wrong. Full rebuilds are expensive, risky, and often recreate the same problems with different technology.

Optimize when:

  • The data model is basically sound. You're fighting implementation, not architecture.
  • Core business processes are captured correctly, even if sloppily.
  • Users know how to use the system. Training investment would be lost in a rebuild.
  • Integrations work, even if they're complex under the hood.

Rebuild when:

  • The data model fundamentally misrepresents your business.
  • You've outgrown the edition or license structure.
  • Technical debt is so severe that every change is a multi-day project.
  • You're moving to a different platform anyway.

Most "rebuild" situations are actually "rebuild this one subsystem" situations. Rarely do you need to nuke the whole thing.

Prioritizing: what actually matters

You can't fix everything. You probably shouldn't. Here's how we prioritize:

  • Revenue impact: Anything that directly affects deals closing or money collected. This is first.
  • User adoption blockers: If reps don't use the CRM, the CRM is worthless. Fix the friction points that drive workarounds.
  • Data quality foundations: Bad data cascades. Fix the source of dirty data before fixing downstream reports.
  • Performance problems: Slow pages and timeout errors erode trust. Speed matters more than features.
  • Compliance risks: Anything that could get you in trouble with regulators or auditors.

Notice what's not on this list: cosmetic improvements, nice-to-have dashboards, fields nobody uses. Those can wait.

Apex vs. declarative: picking the right tool

Salesforce consultants fall into two camps: "everything should be clicks" and "real developers write code." Both are wrong.

Use declarative tools (Flows, validation rules, formula fields) when:

  • Logic is straightforward and unlikely to change frequently.
  • Your admin team can maintain it without developer help.
  • There's no performance concern (low volume, simple operations).

Use Apex when:

  • Logic involves complex calculations or external callouts.
  • You're hitting governor limits with declarative approaches.
  • You need robust error handling and logging.
  • Performance matters (bulk operations, large data volumes).

The worst outcomes we see are Flows that should have been Apex. Flows are great until they're not - and when they break, debugging is painful.
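
For a sense of what tips the scale toward Apex, here's a rough sketch: bulk-safe iteration, branching math that's easy to read and version-control, and error handling that doesn't fail silently. The tier thresholds and the Discount__c field are illustrative, not from any real org:

```apex
// Sketch: tiered pricing recalculation - the kind of logic that outgrows a Flow.
// The thresholds and the Discount__c field are hypothetical.
public with sharing class PricingRecalculator {

    public static void recalculate(List<Opportunity> opps) {
        for (Opportunity opp : opps) {
            Decimal amount = opp.Amount == null ? 0 : opp.Amount;
            // Branching math is easier to read, test, and diff here than in a Flow canvas
            if (amount >= 1000000)      opp.Discount__c = 0.15;
            else if (amount >= 250000)  opp.Discount__c = 0.10;
            else if (amount >= 50000)   opp.Discount__c = 0.05;
            else                        opp.Discount__c = 0;
        }

        // Partial-success DML: one bad record doesn't silently kill the other 199
        Database.SaveResult[] results = Database.update(opps, false);
        for (Integer i = 0; i < results.size(); i++) {
            if (!results[i].isSuccess()) {
                System.debug(LoggingLevel.ERROR, 'Recalc failed for ' + opps[i].Id +
                    ': ' + results[i].getErrors()[0].getMessage());
            }
        }
    }
}
```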

Integration reality

Every company thinks their integration requirements are unique. They're usually not. The patterns are well-known:

  • Real-time sync: When a record changes here, update it there immediately. Works for low volume, gets expensive fast.
  • Batch sync: Sync everything every night (or hour). Simpler, cheaper, but data is always somewhat stale.
  • Event-driven: Publish changes, let subscribers decide what to do. More complex setup, more flexible long-term.

What makes integrations fail:

  • No error handling. The happy path works. Everything else is silent failure.
  • No monitoring. You find out things are broken when users complain.
  • No documentation. The original developer's laptop is the only reference.
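
The fixes aren't exotic. Here's a rough sketch of the baseline every integration should have: wrap the callout, record failures somewhere a human will actually see them. The ERP_API Named Credential and the Integration_Log__c object are placeholders, not a prescription:

```apex
// Sketch: a callout wrapper that refuses to fail silently.
// 'ERP_API' and Integration_Log__c are hypothetical placeholders.
public with sharing class ErpSyncService {

    public static void pushAccount(Id accountId, String payload) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:ERP_API/accounts');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(payload);
        req.setTimeout(20000);

        try {
            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() >= 300) {
                // Non-2xx responses are errors, not something to scroll past in debug logs
                logFailure(accountId, 'HTTP ' + res.getStatusCode() + ': ' + res.getBody());
            }
        } catch (System.CalloutException e) {
            // Timeouts and DNS failures land here - still logged, still visible
            logFailure(accountId, e.getMessage());
        }
    }

    private static void logFailure(Id recordId, String detail) {
        // A record a human can report on beats a debug log nobody reads
        insert new Integration_Log__c(
            Record_Id__c = recordId,
            Status__c    = 'Failed',
            Detail__c    = detail.left(32768)
        );
    }
}
```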

When Salesforce hits its limits

Sometimes the answer isn't better Salesforce code. It's recognizing when you've hit platform limits that no amount of optimization will fix.

Salesforce governor limits exist for good reason, but they create hard ceilings:

  • 100 callouts per transaction. Processing 10,000+ external records? That's 100 separate transactions minimum.
  • 6MB heap size (12MB for async). Large data transformations crash before they complete.
  • 10 seconds of CPU time. Complex calculations time out on bulk operations.
  • 100 SOQL queries per transaction. Nested loops become architecture problems.
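
Each of these ceilings is queryable at runtime through the Limits class, which is how you catch a process drifting toward the wall before it throws a LimitException. A minimal sketch, with thresholds picked arbitrarily for illustration:

```apex
// Sketch: checking consumption against governor limits mid-process,
// so remaining work can be deferred instead of dying mid-transaction.
public with sharing class LimitGuard {
    public static Boolean nearLimits() {
        Boolean near =
            Limits.getQueries()  > Limits.getLimitQueries()  - 5    ||
            Limits.getCallouts() > Limits.getLimitCallouts() - 2    ||
            Limits.getCpuTime()  > Limits.getLimitCpuTime()  - 2000;

        if (near) {
            System.debug(LoggingLevel.WARN, 'Approaching governor limits: ' +
                Limits.getQueries() + '/' + Limits.getLimitQueries() + ' SOQL, ' +
                Limits.getCpuTime() + '/' + Limits.getLimitCpuTime() + ' ms CPU');
        }
        return near;
    }
}
```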

We see this play out constantly. A financial services client needed to sync regulatory data for 8,700+ institutions and run real-time matching across 11,800+ accounts. Their previous vendor spent six months building a solution that only worked on point-in-time snapshots - export the data, run the match, produce a list. By the time the snapshot was taken, the data was already stale. Reps couldn't answer "what can I sell this client right now?" because the system couldn't query current holdings against current offerings.

The difference between a dead report and a live sales tool is whether it queries against current data or yesterday's snapshot. Real-time matching requires architecture that Salesforce alone can't provide.

The hybrid pattern works like this:

  • Heavy processing: AWS Lambda downloads, parses, and stores external data. No governor limits. Handles the 40MB+ quarterly regulatory files that would crash Salesforce.
  • User experience: Salesforce displays results and captures user actions. Familiar interface, built-in security, existing workflows.
  • Real-time bridge: Salesforce calls AWS endpoints on demand. Sub-3 second response times across the full dataset. Platform Events trigger batch jobs when new data arrives.
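
The Salesforce side of that bridge stays deliberately thin. A minimal sketch of the Platform Event piece, assuming a hypothetical Regulatory_Data_Ready__e event that AWS publishes (via the REST API) once a new file is parsed, and a hypothetical RegulatoryDataSyncBatch class that pulls the summarized results into Salesforce:

```apex
// Sketch: subscriber trigger on a hypothetical platform event.
// AWS publishes Regulatory_Data_Ready__e after parsing a new quarterly file;
// Salesforce reacts by kicking off the batch that syncs the results back in.
trigger RegulatoryDataReady on Regulatory_Data_Ready__e (after insert) {
    for (Regulatory_Data_Ready__e evt : Trigger.new) {
        // Heavy lifting already happened in AWS - this job just imports the output
        Database.executeBatch(new RegulatoryDataSyncBatch(evt.File_Key__c), 200);
    }
}
```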

Within three weeks of launch, the client closed $50M in new placements. The matching engine surfaced opportunities that manual processes missed - 40+ qualified matches per search versus the 10-15 reps could remember. Year-over-year transaction growth hit 32%, compared to a 1.3% industry baseline.

Cost matters too. Here's what the same workload costs across platforms:

  • Heroku (Salesforce's own cloud): PostgreSQL $50-200/month, Dynos $25-250/month, Heroku Connect $200/month minimum, Private Spaces for VPC security $1,000+/month. Total: $350-1,650/month.
  • AWS (our approach): Lambda ~$10/month, RDS PostgreSQL ~$25/month, API Gateway ~$5/month, NAT Gateway ~$35/month. Total: ~$78/month.

That's not a typo. The annual difference is $3,200-19,000 in infrastructure costs alone. When you need both Salesforce expertise and cloud infrastructure knowledge, that's where the real value lives.

Query patterns that actually scale

Governor limits aren't just about external callouts. The 100 SOQL queries per transaction limit kills most naive implementations. If you're looping through records and querying related data for each one, you'll hit the wall fast.

The pattern that works: aggregate queries that let the database do the math.

  • Bad: Loop through 3,000 transactions, query holdings for each. That's 3,000 queries - you're dead before you start.
  • Good: One aggregate query with GROUP BY and SUM(). The database returns totals by institution. One query, same result.

For the financial services client mentioned above, this meant querying 3,000+ active transactions and calculating compliance limits per institution - not per deal. FDIC insurance limits apply at the institution level, so holdings at the same bank across multiple deals must be combined. An aggregate query handles this in one pass.
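
A rough sketch of that shape in SOQL - the object and field names here are illustrative, not the client's actual schema:

```apex
// Sketch: let the database aggregate holdings per institution in one query.
// Transaction__c, Institution__c, and Principal_Amount__c are illustrative names.
Map<Id, Decimal> totalsByInstitution = new Map<Id, Decimal>();

for (AggregateResult ar : [
    SELECT Institution__c inst, SUM(Principal_Amount__c) total
    FROM   Transaction__c
    WHERE  Status__c = 'Active'
    GROUP BY Institution__c
]) {
    // One row per institution, already summed across every active deal
    totalsByInstitution.put((Id) ar.get('inst'), (Decimal) ar.get('total'));
}
// One SOQL query consumed, whether that's 300 deals or 3,000.
```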

When you need to exclude records based on calculated thresholds (like "exclude institutions where combined holdings exceed $205K"), build the exclusion set first, then run the main query with a NOT IN clause. Two queries instead of thousands.
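
And the two-query version of that threshold exclusion, using the same illustrative schema with $205K standing in for the compliance cap:

```apex
// Sketch: build the exclusion set first, then filter the main query with NOT IN.
// Same illustrative schema as above; 205000 represents the combined-holdings threshold.
Set<Id> overLimitInstitutions = new Set<Id>();

// Query 1: institutions whose combined active holdings already exceed the cap
for (AggregateResult ar : [
    SELECT Institution__c inst
    FROM   Transaction__c
    WHERE  Status__c = 'Active'
    GROUP BY Institution__c
    HAVING SUM(Principal_Amount__c) > 205000
]) {
    overLimitInstitutions.add((Id) ar.get('inst'));
}

// Query 2: eligible institutions, excluding anything already at its cap
List<Institution__c> eligible = [
    SELECT Id, Name
    FROM   Institution__c
    WHERE  Active__c = true
    AND    Id NOT IN :overLimitInstitutions
];
// Two queries total - instead of one per institution.
```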

Data quality matters too. During the initial sync, we discovered 117 "ghost" institutions - accounts marked active in Salesforce that had actually been closed through mergers or failures. Bad data doesn't just slow you down; it erodes trust in the system. When reps don't trust the data, they stop using the CRM.

The real cost of Salesforce talent

Hiring a senior Salesforce developer costs $150-200K+ in salary alone. Add benefits, recruiting fees, and the 3-6 months to find someone good. Then they leave in two years and you start over.

Here's what companies actually need:

  • Complex builds: Integrations, Apex development, architecture decisions. This is project work, not a full-time role.
  • Technical debt cleanup: Intensive for 3-6 months, then maintenance. Hiring full-time for a temporary spike doesn't make sense.
  • Ongoing optimization: Continuous improvement that requires senior expertise but not 40 hours a week of it.

Working with a partner gives you senior Salesforce architects, developers, and specialists without the hiring timeline or overhead. You get the expertise for the project, then ongoing support as you need it.

The test coverage example from earlier illustrates why ongoing support matters. That client didn't know they had a problem until a new project got blocked. Years of untested deployments created invisible debt that only surfaced when we tried to deploy code that actually met Salesforce standards. Cleaning up someone else's shortcuts cost hours of work that could have been avoided with consistent oversight.

Ongoing admin support isn't about having someone on standby for emergencies. It's about catching problems before they compound - reviewing deployments, maintaining test coverage, monitoring integration health, and keeping documentation current. The cost of prevention is a fraction of the cost of cleanup.

Most of our clients start with a cleanup project and stay for ongoing optimization. Not because they're locked in, but because the results speak for themselves.

Ready to fix your Salesforce?

We'll assess your instance, prioritize what matters, and fix it. Senior Salesforce architects and developers, outcomes-based pricing, no long-term contracts required.

Book a call

or email partner@greenfieldlabsai.com
