H. Marlecha  ·  Business & Data Analyst  ·  East Hanover, NJ

Most of the job is translation. I get it right.

Junior BSA at Pharmvista Inc. Mapping departments, tools, and handoffs for the systems team.
§ 01 The thesis

Most of the job is getting ops, leadership, and IT to agree on what the problem actually is. I spend my days writing that down in words all three can read.

HM · East Hanover · 2026
Nine case studies · two full time & internship, two campus leadership, one personal, four independent
Current role B2B contract manufacturing · since Apr 2026

Pharmvista Junior BSA

Assignments live · 5
01 · Pharmvista Inc.
Making a growing business legible

Junior BSA at a fast growing gummy contract manufacturer. My portfolio covers the website audit, AI workflow mapping across departments, direct to consumer channel requirements, SEO, and social strategy. The job is to write down what the business is actually doing so the systems team can build to it.

Notion · Miro · Power BI
Requirements · mapping
Stakeholder interviews
Read the case
Internship Safety operations · 2025

Lhoist SOP consolidation

Onboarding time · −20%
02 · Lhoist North America
One template everyone hated equally

Multiple terminals, multiple versions of the same safety SOP. Mapped the overlap in Excel, consolidated into one Power BI tracked template with site appendices, and built the training content behind it. Internship extended twice.

Power BI · DAX
SharePoint · Clipchamp
Process mapping
Read the case
Part time Campus operations · 2024 to 2025

UTA Event Ops

Events delivered · 36+
03 · UTA University Center
Setup Crew to Event Personnel in eleven months

Two promotions in eleven months running setup, AV, and live support across events from fifty person workshops to five hundred person conferences. Project management with an audience watching you do it. Best training I've had on handling things going wrong in real time.

Leadership · logistics
Teams of 15+
Live incident response
Read the case
Leadership Honor society · 2023 to 2025

BGS social strategy

Tenure · 22 mo
04 · Beta Gamma Sigma, UTA
Growing an honor society's social presence

Social Media Marketing Officer for UTA's Beta Gamma Sigma chapter across 22 months. Built the content calendar, the visual system, and the engagement feedback loop. The lesson: social isn't about the organization, it's about the people in it.

Canva · Instagram
LinkedIn · Notion
Engagement analytics
Read the case
Personal Systematic trading · since 2024

Options trading system

Trades journaled · 500+
05 · Personal project
Building a rule based system from scratch

Fourteen months of ICT price action study, Pine Script coded setups, paper trades on NQ and ES, and a disciplined journal. Not a side hustle. A training ground for the same systematic thinking the analyst work asks for.

Pine Script · Python
TradingView · Notion
Backtesting · journaling
Read the case
Independent Weekend writeup · 2025

SaaS cohort teardown

Writeup · 4,200 words
06 · Self initiated
Why "average churn" lies to you

A weekend exercise on synthetic SaaS subscription data. Cohort retention curves, segmented LTV, and the specific ways a single monthly MRR average hides a brutal sub segment. Worked example, clear conclusions.

Python · pandas
SQL · Matplotlib
Cohort analysis
Read the case
Independent Framework · 2025

SKU rationalization

Framework · 6 steps
07 · Self initiated
How mid market CPG brands kill SKUs wrong

Most SKU cut decisions get made on revenue alone, which is how a brand loses its best margin line. Six step framework that weighs velocity, contribution margin, and cannibalization, with a worked example using public retail data.

Excel · Power BI
Pareto analysis
Margin modeling
Read the case
Independent Requirements doc · 2026

Returns portal BRD

Doc length · 28 pg
08 · Self initiated
A full BRD for a B2B returns portal

Picked a real problem that comes up constantly in mid market B2B e-commerce, wrote the requirements doc a BSA would ship. User personas, user stories, functional and non functional requirements, process flows, acceptance criteria, and a vendor neutral solution scorecard.

User stories · BPMN
Acceptance criteria
Figma · Lucidchart
Read the case
Card art · ERP vendor scorecard: NetSuite, SAP B1, MS Dynamics, and Acumatica scored on core fit, integrations, TCO, and implementation time, with a recommendation flagged.
Independent Vendor selection · 2026

ERP vendor selection

Vendors scored · 4
09 · Self initiated
Replacing a legacy ERP without the chaos

A structured evaluation of four mid market ERPs against a twenty five criterion scorecard covering core fit, integrations, total cost of ownership, and implementation risk. Ends in a recommendation memo the CFO and the IT director can both sign.

Scorecards · RACI
RFP & demo scripts
TCO modeling
Read the case
Five habits, in order

How I actually work.

Five habits I've picked up across internships, campus jobs, and a year of trading. They're not a framework. They're what I do. Scroll sideways.

Habit 01
Before anything

Listen longer than feels comfortable.

The person with the problem usually knows the answer. They just haven't said it yet. I don't interrupt.

Habit 02
Cheap before fancy

Pull the smallest useful sample.

I check the shape of the data against the question before running anything at scale. A thousand rows is usually enough to catch a broken schema.

Habit 03
The hard part

Start with the decision.

I write the one sentence the decision maker needs to say when the number lands, then work backward. Saves me from building dashboards nobody uses.

Habit 04
Ship ugly

Ship the boring version first.

A plain table that ties to the source beats a pretty dashboard that doesn't. I polish after it's working, not before.

Habit 05
The tell

Watch someone else use it.

Half the bugs are column labels. If a director pauses on a header for three seconds, I rename it before they ask.

Eight data points, honestly labeled

The short version in numbers.

Pulled from transcripts, offer letters, and LinkedIn. All real.

3.83
GPA. Magna Cum Laude, UTA Information Systems, May 2025
8
Consecutive Dean's List semesters, Fall 2021 through Spring 2025
0+
One on one tutoring sessions as a UTA Peer Educator
0+
Incoming students guided through UTA New Maverick Orientation
500+
Options trades journaled. Fourteen months, setup to outcome
20%
Onboarding time reduction at Lhoist via consolidated SOP template
6 lang.
Hindi, English, Tamil, Gujarati, Marwari, and enough French to order dinner
1 of 1000+
Selected for the Goolsby Leadership Academy Cohort 20
Five years, Arlington to East Hanover

How I got here.

The resume has the job titles. This has the rest.

2021

Started at UTA. Paid my own way.

Information Systems. Dean's List first semester. Maverick Academic Scholarship. First time living outside Texas.

Arlington, TX
2022

Resident Assistant. Running a building.

Campus Living Villages. Twenty five percent occupancy bump in one leasing cycle. First time I ran ops for something with real stakes.

Housing · Leasing · Ops
2023

Goldman Fellows · Stockholm · BGS officer.

Too much at once, in a good way. Studied Comparative Economic Systems and Healthcare Economics at DIS Stockholm. Goldman Sachs Fellows. Took over BGS social at UTA.

Dallas → Stockholm → Arlington
2024

UTA Event Ops. Promoted twice. Started trading.

Setup Crew to Crew Lead to Event Personnel in eleven months. Also the year I got serious about options. First hundred trades mostly taught me what not to do.

E.H. Hereford UC
2025

Graduated. Then Lhoist. Extended twice.

Magna Cum Laude. Eight straight Dean's List semesters. Lhoist internship in La Porte, TX was supposed to be four months. Ran eight. They kept finding projects. I kept showing up.

UTA · La Porte, TX
2026

Pharmvista. QA Analyst to Jr. BSA.

Moved to New Jersey in January. Started in QA. Three months in, picked up the Junior BSA title and a wider remit. First real career job.

East Hanover, NJ
Short version
Harsh Marlecha
Arlington → E. Hanover

Trying to be the analyst they trust on the third phone call.

B.S. in Information Systems from UT Arlington. 3.83 GPA, eight consecutive Dean's List semesters, Beta Gamma Sigma, Goolsby Leadership Academy Cohort 20. I put these on the resume because they're honest signal.

Currently a Junior BSA at Pharmvista Inc., a contract gummy manufacturer in East Hanover. My day is interviews, requirements, workflow maps, and writing down what the business actually does so the systems team has something to build to.

Outside work: five hundred plus logged options trades over fourteen months. The trading taught me to keep a journal, define the rule before the trade, and sit on my hands most of the time. Turns out that's useful at the analyst desk too.

Education
B.S. Information Systems, UTA · 3.83 GPA
Honors
Magna Cum Laude · Dean's List ×8
Societies
Beta Gamma Sigma · Goolsby Cohort 20
Languages
Hindi · English · Tamil · Gujarati
Based
East Hanover, NJ
Open to
BA, BSA, Analytics · any location

Tools I reach for every day.

01
SQL
Joins, window functions, CTEs. Readable first, fast second.
02
Python
pandas and numpy. Cleaning, reconciling, one off analyses.
03
Power BI
DAX, data models, dashboards people open more than once.
04
Excel
XLOOKUP, PivotTables, models that don't break when someone else opens them.
05
21 CFR 111
Batch records, CoA review, deviation docs from the Pharmvista QA months.
06
Process mapping
Current state, target state, gaps. The rewrite nobody volunteers for.
07
Pine Script
TradingView indicators and strategies for the options work.
08
Writing
SOPs, one pagers, explainers. Half the job is written.
01 / 09 Pharmvista · Junior BSA
Current role · Junior Business Systems Analyst · B2B contract manufacturing

Making the business legible.

Company
Pharmvista Inc.
Role
Junior BSA
Dates
Since Apr 2026
Cadence
Tue · Wed · Thu
Fig. 01 · Business process map, month one
Quote to ship · contract manufacturing flow: Inquiry (mapped) → Sample (mapped) → Quote (partial) → Formulation (mapped) → Batch run (gap) → Release (partial) → Ship (mapped). AI tool footprint, departments × usage: R&D (ChatGPT, Claude, Notion AI) · Procurement (ChatGPT, shadow) · Receiving (Excel only) · Production (ERP gap) · Sales (HubSpot, Claude) · Marketing (Canva, GA4). Month one outputs: 6 departments, 23 processes, 14 AI tools, 34 web findings. DTC requirements doc: 22 pg, ready for vendor talks.
Process map · current statePharmvista · month one

Five assignments, five stakeholder groups.

Pharmvista makes gummies for other brands. B2B. Fast growth. The systems, the processes, and the digital footprint drifted apart while everyone was shipping. The CEO handed me five open assignments on day one as Junior BSA: audit the website, map the AI tool footprint across departments, scope the DTC channel, define an SEO strategy, rebuild the social presence. Each runs through a different stakeholder group. Each needs the same BSA playbook.

The playbook I run on every assignment.

01
Identify every stakeholder.
Not just the person who raised the ask. Business owner, process owner, system owner, IT, and whoever signs off on spend. Miss one and the requirements are wrong before I start.
02
Interview to capture current state.
Thirty minutes each. How do you do it today. Where does it break. What would you change. I write without editing until they stop. Then I play it back in my own words to check I heard it right.
03
Document gaps and requirements.
Current state diagram. Target state diagram. Functional and non functional requirements. Acceptance criteria. One Notion page per assignment, same structure every time, so a stakeholder reading their third can navigate the fourth without a tour.
04
Work with the systems team on options.
I don't build the solution. I bring requirements to the dev team and to the CEO and we cost out options together. My job is to make sure the options we evaluate actually address the business need, not a prettier version of the status quo.
05
Run UAT before sign off.
Every change gets tested against the acceptance criteria by the stakeholder who owns it, not by me and not by the dev who built it. If the stakeholder can't use it on their own, it doesn't ship.
A Junior BSA's job isn't to have the answer. It's to be the person both sides trust to write down the question.

Five deliverables across five assignments.

01
Stakeholder map · 6 departments, 23 processes.
Named owners for every process from inquiry to ship. Who owns it. Who consumes its output. Who signs off on changes.
02
AI tool footprint · 14 tools cataloged.
Department, function, tool, owner, status. Surfaced three shadow deployments, two duplicate spend cases, one procurement gap.
03
pharmvista.com audit · 34 findings.
Eleven quick wins shipped in month one. Remaining twenty three prioritized by effort and impact for Q3.
04
DTC requirements doc · 22 page BRD.
Tech stack options scored. Unit economics modeled. Operational impact on the B2B side documented. Vendor conversations open.
05
Notion documentation backbone.
One workspace. One naming convention. One page template per assignment type. Adopted company wide.
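Deliverable 02 is, underneath, a flat catalog plus two queries: tools with no named owner, and tools paid for twice. A minimal Python sketch of that shape; the rows, tools, and owners here are invented placeholders, not Pharmvista's actual inventory.

```python
from collections import Counter

# Hypothetical AI tool footprint catalog: department, function, tool,
# owner, status. All rows are illustrative examples.
catalog = [
    {"dept": "R&D",         "function": "drafting",  "tool": "ChatGPT", "owner": "R&D lead",  "status": "approved"},
    {"dept": "Procurement", "function": "drafting",  "tool": "ChatGPT", "owner": None,        "status": "shadow"},
    {"dept": "Sales",       "function": "summaries", "tool": "Claude",  "owner": "Sales ops", "status": "approved"},
    {"dept": "Marketing",   "function": "design",    "tool": "Canva",   "owner": "Marketing", "status": "approved"},
]

def shadow_deployments(rows):
    """Tools in use with no named owner, or flagged as shadow IT."""
    return [r for r in rows if r["status"] == "shadow" or r["owner"] is None]

def duplicate_spend(rows):
    """Same tool showing up under more than one department."""
    counts = Counter(r["tool"] for r in rows)
    return sorted(t for t, n in counts.items() if n > 1)

print([r["dept"] for r in shadow_deployments(catalog)])  # ['Procurement']
print(duplicate_spend(catalog))                          # ['ChatGPT']
```

The point of keeping it this flat is that the interesting findings (shadow deployments, duplicate spend) fall out of trivial queries once the catalog exists.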

What leadership can point to.

6 · 23
Departments mapped across twenty three distinct processes
14
AI tools cataloged by department, function, owner, and status
34 · 11
Website audit findings. Eleven shipped as quick wins
22 pg
DTC channel requirements doc, ready for vendor conversations

The interview is the job.

The analysis is the easy half. The hard half is the conversation that produces the inputs for it. If the stakeholder interview goes sideways, nothing I put in Notion three days later will save it. So I've gotten better at interviews than at analysis, which is not the order I expected.

The other thing: write requirements in language the stakeholder can read back to you without pausing. If they have to translate as they read, you wrote it for yourself, not for them.

Notion
Miro
Power BI
Excel
Figma
SEMrush
Jira
Loom
Next case · 02 / 09
Lhoist SOP consolidation
continue ↗
02 / 09 Lhoist · SOP consolidation
Internship · Industrial ops · terminal safety

One template everyone hated equally.

Company
Lhoist North America
Role
Terminal Safety Project Analyst
Dates
May to Dec 2025 (extended 2×)
Location
La Porte, TX · multi site
Fig. 01 · Before / After: SOP structure across sites
Before: 4 sites, 4 SOPs (Site A 34 pg, Site B 28 pg, Site C 41 pg, Site D 37 pg). After: 1 template, 22 pg core plus appendices (Appx A site specific, Appx B gate entry). Outcomes: −20% onboarding time · 4 → 1 docs to maintain · internship extended twice · adopted at 2 more sites.
SOP consolidation · before and afterLhoist · La Porte

Same work. Multiple ways of writing it down.

Lhoist operates multiple terminals across the southern U.S. Each had its own version of the same basic safety SOP. Gate entry, lockout tagout, spill response, emergency shutdown. Site managers had diverged over years. New hires had to re learn at each site. Audit prep was brutal.

The content wasn't the problem. The divergence was. If Site A said "within 50 feet" and Site B said "within 15 meters," both were correct, both were defensible, and the cognitive overhead of tracking which one applied where was costing us time nobody accounted for.

Mapped the overlap. Consolidated. Shipped ugly first.

01
Pulled all four SOPs apart, section by section.
Dropped every version into Excel with a column per site and a row per SOP section. It made both the overlap and the differences obvious.
02
Picked the best language per section.
Not the longest, not the site manager's favorite. The clearest. Sometimes that was Site C's version. Sometimes Site A's. The winner got merged into the master.
03
Consolidated into one template plus appendices.
The common seventy percent lived in the main doc. The thirty percent that genuinely varied by site (permits, specific equipment, local authorities) went into appendices. Core SOP went from roughly 35 pages average to 22 pages.
04
Built the Power BI dashboard behind it.
Tracked SOP compliance across sites. When a site fell behind on training acknowledgments, it showed up. Gave safety leadership one view for cross site reporting.
05
Recorded short training videos for the unclear parts.
Clipchamp. Two minutes each. Embedded in the SharePoint page next to the SOP. Reading isn't always the fastest path to comprehension. Sometimes a thirty second clip is.
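Step 01's Excel grid (a row per SOP section, a column per site) translates directly to code. A small sketch of the same divergence check, with invented section text standing in for the real SOP content:

```python
# Hypothetical SOP grid: section name -> wording at each site.
# The section names and sentences are illustrative stand-ins.
sop = {
    "Gate entry": {"A": "Badge in at kiosk.", "B": "Badge in at kiosk.",
                   "C": "Badge in at kiosk.", "D": "Badge in at kiosk."},
    "Spill response": {"A": "Cordon within 50 feet.", "B": "Cordon within 15 meters.",
                       "C": "Cordon within 50 feet.", "D": "Cordon within 50 feet."},
}

def divergent_sections(grid):
    """Sections where at least two sites use different wording."""
    return [name for name, by_site in grid.items()
            if len(set(by_site.values())) > 1]

def consensus_candidates(grid):
    """For each divergent section, the wording variants to pick between."""
    return {name: sorted(set(grid[name].values()))
            for name in divergent_sections(grid)}

print(divergent_sections(sop))  # ['Spill response']
```

Identical sections merge automatically; only the divergent ones need a site-manager conversation, which is exactly where the change-management time went.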
Analytics was thirty percent of the job. Change management was seventy percent. The content wasn't the hard part. Getting site managers to agree on one template was.

What actually changed.

−20%
Onboarding time, measured against prior cohort time to independent
4 → 1
SOP documents collapsed into a single template with site appendices
2×
Internship extended, from four months to eight months
+2 sites
Template adopted by two other Lhoist facilities beyond original scope

The lesson.

The first template I shipped was ignored for three weeks because I hadn't consulted the site managers who would have to use it. They weren't wrong to push back. I hadn't earned their buy in. The second version, which was worse on paper, got adopted fast because they'd been in the room while I built it.

That's the lesson I still carry into Pharmvista. The analysis is the smaller half. The human work around the analysis is the bigger half.

Excel
Power BI · DAX
SharePoint
Clipchamp
Microsoft Teams
Next case · 03 / 09
UTA Event Ops
continue ↗
03 / 09 UTA · Event Operations
Part time · Campus operations · Jun 2024 to May 2025

Setup Crew to Event Personnel in eleven months.

Organization
UTA University Center
Roles
Setup → Lead → Personnel
Dates
Jun 2024 to May 2025
Events
36+ delivered
Fig. 01 · Event floor plan · 350 seat conference setup
Stage 40 ft · backstage · green room · AV/tech booth · center aisle. Live incident log, 2024–25 season: AV dropout, 4 min, patched local · vendor no show, seating re flowed · weather, tent swap on a pre event call. 36+ events · ×2 promotions · 15+ crew trained.
Floor plan · Fall 2024 career fairE.H. Hereford UC

Fifty person workshops to five hundred person conferences.

The E.H. Hereford University Center hosts conferences, concerts, career fairs, orientations, and banquets. Every event needs setup, AV, and live support. I worked this job for eleven months, through thirty six plus events, and got promoted twice.

The pay was average. The training was better than any class I took that year.

What changed with each promotion.

01
Setup Crew. Learned the floor.
Jun to Jul 2024. Chairs, tables, AV rigs, stage layouts. The first two weeks were mostly learning where things lived. Also learned that the difference between a good event and a bad one shows up in the ten minutes before doors open.
02
Crew Lead. Running the setup.
Jul 2024 to Jan 2025. Teams of fifteen plus. Trained new crew. Coordinated with event managers. My job became less about moving chairs and more about making sure other people moved the right ones in the right order.
03
Event Personnel. End to end ownership.
Jan to May 2025. Took client calls. Coordinated logistics from inquiry to strike. Resolved live issues during events. When something went wrong in front of five hundred people, the buck stopped at me. Which is a useful thing to learn at twenty one.
Event logistics is project management with an audience watching you do it. The crew that recovers fastest wins.

Things that transfer directly to the analyst job.

01
You can't predict every failure. You can predict the category.
AV dropouts, weather, vendor no shows. Those three were eighty percent of our incidents. Once I saw the category pattern, I stopped being surprised, which meant I had runway to respond instead of react.
02
A written setup list beats ten verbal briefings.
Verbal briefings drift. Written lists don't. I wrote the setup lists for my events, printed two copies, handed one to the crew lead, kept one on me. Same principle as SOPs. Same principle as requirements docs.
03
The crew that trusts you is the crew that ships.
Couldn't have put this cleanly a year ago. Know it now. Trust gets built in how you handle small things, not big ones. If you cover for your crew on a minor stumble, they cover for you on a major one.
04
Plans don't survive contact with the real world.
This is the analyst lesson too. A dashboard, a report, a requirements doc, none of them matter if nobody uses them when the pressure is on. The test is always live.

What shipped.

×2
Promotions in eleven months. Crew to Lead, Lead to Personnel
36+
Events delivered, from fifty person workshops to five hundred person conferences
15+
Crew members trained and led across the run
0
Major incidents I couldn't resolve live during an event
Clipboards
Walkie talkies
Google Sheets
WhatsApp groups
Pre event checklists

Simple tools, hard work. The tools are never the point.

Next case · 04 / 09
BGS social strategy
continue ↗
04 / 09 Beta Gamma Sigma · Social strategy
Leadership · Honor society · Aug 2023 to May 2025

Growing an honor society's social presence from inside it.

Organization
Beta Gamma Sigma · UTA
Role
Social Media Marketing Officer
Tenure
22 months
Platforms
IG · LinkedIn · Chapter page
Fig. 01 · Content calendar · one month, post cadence and engagement
Oct 2024 content calendar: spotlights (+247, +312, +289, +267, +334), events (+184, +221, +198, +212), news (+62), recap (+156) across the month. Average engagement by post type: member spotlight 289 · event post 193 · news / announcement 62. People posts over news: 4.6×.
Content calendar + engagement by typeUTA BGS · Oct 2024

An honor society with a dormant social presence.

Beta Gamma Sigma is the international honor society for business programs. UTA's chapter has a couple hundred active members across graduating years. When I took over as Social Media Marketing Officer in August 2023, the chapter's social presence was mostly dormant. A few posts a semester. No content calendar. No voice. No data feedback loop.

I kept the role for twenty two months, through my own induction, through the 2024 cohort, through the 2025 graduation. Long enough to see what worked and what didn't, and long enough to document it for whoever came next.

Four moves. None of them clever.

01
Set a content calendar built around members.
Weekly member spotlight. Monthly achievement roundup. Events as they happened. People forward, not organization forward. The difference matters more than it sounds.
02
Built a visual system in Canva.
Eight templates that anyone after me could maintain. Consistent type, color, spacing. Three accent colors, not ten. The goal was to make it boring to diverge, so my successor wouldn't.
03
Analyzed engagement by post type.
People forward posts outperformed organization news 4.6 to 1 on average. Once I had the number, the calendar shifted accordingly. Should've been obvious. Wasn't, because nobody had counted.
04
Cross posted with intent.
Instagram for current students. LinkedIn for alumni and recruiters. The formal chapter page for official comms. Same content, three different edits. The crime is copy pasting the IG caption into LinkedIn and calling it a post.
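The engagement analysis in move 03 is a plain group-and-average. A sketch with illustrative engagement counts (example numbers in the shape of one month's posts, not the chapter's actual analytics export):

```python
from statistics import mean
from collections import defaultdict

# (post_type, engagement) pairs — invented, for illustration only.
posts = [
    ("spotlight", 247), ("event", 184), ("spotlight", 312),
    ("event", 221), ("spotlight", 289), ("news", 62),
    ("event", 198), ("spotlight", 267), ("news", 58),
]

def avg_by_type(rows):
    """Average engagement per post type."""
    buckets = defaultdict(list)
    for post_type, engagement in rows:
        buckets[post_type].append(engagement)
    return {t: mean(v) for t, v in buckets.items()}

averages = avg_by_type(posts)
ratio = averages["spotlight"] / averages["news"]
print(round(ratio, 1))  # people-forward vs news, on this sample: 4.6
```

Once the ratio is a number instead of a hunch, shifting the calendar toward people-forward posts stops being a debate.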
Social isn't about the organization. It's about the people in it. The day I stopped posting about BGS and started posting about the members, engagement doubled.
2 → 12
Monthly post cadence, from two posts a month to twelve
4.6×
Member spotlights outperformed organization news on engagement
2
Chapter events sold out through social alone, no email blast
22 mo
Continuous tenure. Handed off a documented playbook to the next officer

The analytics were the easy part.

The hard part was getting members to send me the material I needed to celebrate them. A member spotlight only works if the member gives you a quote, a photo, and an opinion about their own achievement. Half my time was chasing those. Engagement was downstream of operations, which is a lesson that applies to basically every data job I'll ever have.

The other lesson: a handover document that nobody reads is a handover document that doesn't exist. I wrote mine with screenshots. Twelve pages. The next officer actually used it.

Canva
Instagram Insights
LinkedIn Analytics
Notion
Later (scheduling)
Next case · 05 / 09
Options trading system
continue ↗
05 / 09 Options · Systematic trading
Personal project · Systematic trading · fourteen months in

Building a rule based trading system from scratch.

Type
Personal · not live sized
Framework
ICT / price action
Dates
Since Feb 2024
Instruments
NQ · ES · SPX options
Fig. 01 · NQ setup: killzone, liquidity sweep, FVG entry
NY open killzone: BSL (buy side liquidity) and SSL (sell side liquidity) marked, liquidity sweep up, entry on FVG retest. Setup: NY open, long. Planned R:R 1:3.0, SL below FVG low. Result: +2.8R.
Journal entry · sanitizedTradingView · paper

Not a side hustle. A training ground for systematic thinking.

People ask me why an analyst is trading options. The honest answer is that systematic trading and systematic analysis are the same muscle. Define the rule before the event. Collect evidence without lying to yourself. Size your conviction to your certainty, not the other way around. Keep a journal. Review it.

What financial media calls "trading" is ninety five percent guessing. I wasn't interested in that. I got interested in Inner Circle Trader's price action framework (liquidity, order blocks, fair value gaps, killzones) because it's rule based. You can write down the setup. You can backtest the setup. You can be wrong about the setup in a way that teaches you something.

Built over fourteen months.

01
Learned the vocabulary until I could explain it to a non trader.
Liquidity pools, order blocks, fair value gaps, killzones, displacement, market structure shifts. Six months of reading and watching before I took a single trade.
02
Coded Pine Script indicators to flag the setups.
FVG detection, killzone highlighting, session separation, liquidity line marking. If I can't code it, I can't trade it. Code forces me to be specific.
03
Paper traded NQ and ES for six months before anything real.
Mostly intraday, NY open killzone. Every trade logged in Notion: setup, entry rationale, stop, target, R multiple, screenshot, lesson. Five hundred plus entries.
04
Ran a monthly review on the journal, not the P&L.
Which setups actually pay? Which setups am I taking that I said I wouldn't? Am I moving stops? Am I sizing up after losers? The P&L is downstream. The behavior is the thing.
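The monthly review in step 04 reduces to two questions a journal export can answer: how often did I go off-plan, and which setups actually pay. A sketch with invented trade rows in the shape of the Notion log, not real journal entries:

```python
from collections import defaultdict

# Hypothetical journal rows: setup name, whether the trade was in the
# written playbook, and realized R multiple. Illustrative only.
trades = [
    {"setup": "NY open FVG",  "in_playbook": True,  "r": 2.8},
    {"setup": "NY open FVG",  "in_playbook": True,  "r": -1.0},
    {"setup": "London sweep", "in_playbook": False, "r": -1.0},
    {"setup": "NY open FVG",  "in_playbook": True,  "r": 3.1},
]

def review(rows):
    by_setup = defaultdict(list)
    for t in rows:
        by_setup[t["setup"]].append(t["r"])
    return {
        # trades taken outside the written playbook: the behavior check
        "off_plan": sum(1 for t in rows if not t["in_playbook"]),
        # average R per setup: which setups actually pay
        "avg_r": {s: round(sum(rs) / len(rs), 2) for s, rs in by_setup.items()},
    }

print(review(trades))
```

The off-plan count is the behavior metric; the per-setup average R is the edge metric. P&L is downstream of both.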
Most "alpha" is sitting on your hands eighty five percent of the time. If you can't write down the rule before the trade, you don't have a rule. You have a feeling.

Four things, plainly.

01
A thirty percent win rate at 3R beats a seventy percent win rate at 0.5R.
Expectancy is the math. Most retail traders optimize for feeling right. The system optimizes for being right on average.
02
The journal matters more than the indicator.
Every time I've been tempted to add a new indicator, the real problem has been that I ignored the rules I already had. The journal makes that visible.
03
Process beats prediction. Every time.
I don't need to be right about direction. I need a defined setup, a defined invalidation, a defined size, and the discipline to execute without editing mid trade. That's it.
04
Most of the skill is not taking trades.
The hardest part isn't analysis. It's sitting through the eighty five percent of sessions that don't offer a clean setup without forcing one. Business analyst translation: don't ship a dashboard without a question that needs it.
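The expectancy claim in point 01 above is one line of arithmetic: average R per trade, given a win rate and a fixed reward multiple with risk normalized to 1R.

```python
def expectancy(win_rate: float, reward_r: float, risk_r: float = 1.0) -> float:
    """Expected R per trade: win_rate * reward − loss_rate * risk."""
    return win_rate * reward_r - (1 - win_rate) * risk_r

print(round(expectancy(0.30, 3.0), 2))  # 30% win rate at 3R   -> 0.2R per trade
print(round(expectancy(0.70, 0.5), 2))  # 70% win rate at 0.5R -> 0.05R per trade
```

0.30 × 3R − 0.70 × 1R = +0.20R per trade, versus 0.70 × 0.5R − 0.30 × 1R = +0.05R. The lower win rate system makes four times as much per trade.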

Paper, for now.

Fourteen months in. Five hundred plus logged trades. I'm not live sized yet. Two more quarters of consistent results on paper before I scale anything meaningful. The point isn't the P&L. The point is that running a system end to end, on my own time, with a journal I actually review, is the same skill I use every day as an analyst.

TradingView
Pine Script
Python · pandas
Notion
Backtesting
Next case · 06 / 09
SaaS cohort teardown
continue ↗
06 / 09 SaaS · Cohort teardown
Independent · Weekend writeup · 2025

Why "average churn" lies to you.

Type
Self initiated
Data
Synthetic SaaS subscription set
Output
4,200 word writeup
Time
~16 hrs, two weekends
Fig. 01 · Retention curves by monthly cohort
Retention curves by monthly cohort, M0 through M12: enterprise, mid market, and self serve segments against the blended average. The self serve curve sits far below the average: the hidden problem.
Retention by segment · synthetic dataIndependent · 2025

A single number, hiding a brutal segment.

A SaaS company I'd been reading about reported healthy month over month churn averages. On paper, retention looked fine. In practice, everyone on the analytics side of that company knew their self serve segment was bleeding. The monthly average was healthy only because enterprise retention was carrying it.

I wanted to show exactly how that kind of average lies, how to break it, and what to watch instead. On synthetic data, because I don't work there.

01
Why cohort curves beat a single MRR average.
A worked example where the weighted monthly average looks flat while the self serve cohort is down forty percent at month three.
02
The LTV/CAC ratio trap at the segment level.
Blended LTV makes bad segments look profitable. Segmenting the ratio flips the entire acquisition recommendation for two of three channels.
03
Three metrics I'd actually put on the wall.
Net revenue retention by segment. Month three cohort retention. Contribution margin of the bottom cohort. Boring, but they'd have caught the problem six months earlier.
04
A reusable Python notebook.
pandas functions that take a subscription event log and spit out cohort retention, segmented NRR, and the three wall metrics. Open source.
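The core move in item 01 can be sketched in a few lines without pandas: retention at a given month, overall and by segment, from a subscription log. The rows here are invented to make the shape of the problem visible, not drawn from the writeup's synthetic dataset.

```python
from collections import defaultdict

# Hypothetical subscription log: (customer, segment, months active).
subs = [
    ("c1", "enterprise", 12), ("c2", "enterprise", 12), ("c3", "enterprise", 9),
    ("c4", "self_serve", 2),  ("c5", "self_serve", 1),  ("c6", "self_serve", 6),
]

def retention_at(rows, month):
    """Share of customers still active at `month`, overall and by segment."""
    by_seg = defaultdict(list)
    for _, seg, active in rows:
        by_seg[seg].append(active >= month)
    overall = sum(active >= month for _, _, active in rows) / len(rows)
    return overall, {s: sum(v) / len(v) for s, v in by_seg.items()}

overall, by_segment = retention_at(subs, month=3)
print(round(overall, 2), by_segment)  # blended looks fine; self serve does not
```

On this toy data the blended month-3 number is 0.67 while self serve is at 0.33 and enterprise at 1.0, which is the whole argument of the writeup in miniature.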
An average is a single number pretending to be a distribution. When the distribution has a long tail, the average is a liar.
41%
Month three retention for the worst cohort, versus 78% for best
2.3×
LTV gap between top and bottom segment when calculated separately
6 mo
Earlier visibility than the single MRR average would have offered
3 metrics
Would have replaced the blended dashboard I was critiquing

Why do it at all.

Two reasons. One, I wanted to prove to myself I could take a real analytical argument from question to conclusion on my own. Two, the blended average problem shows up everywhere, not just SaaS. QA data averages hide a failing line. Event attendance averages hide a dead vertical. The muscle transfers.

Python · pandas
SQL
Matplotlib
Jupyter
Next case · 07 / 09
SKU rationalization
continue ↗
07 / 09 SKU · Rationalization framework
Independent · Framework · 2025

How mid market CPG brands kill SKUs wrong.

Type
Self initiated
Output
Six step framework
Data
Public retail dataset
Audience
CPG ops & category managers
Fig. 01 · The mistake and the fix
The mistake · cut by revenue only: of SKUs A through F, the low revenue SKUs E and F get cut. But SKU E had 78% margin and F was the gateway to the brand. The fix · a three axis score: SKUs plotted on margin × velocity, with bubble size as cannibalization risk (larger = safer to cut), and E kept for its high margin.
Decision matrix · three axesIndependent · 2025

Cutting by revenue alone kills the wrong SKUs.

Every few years a mid market CPG brand decides its catalog is too long. The CFO asks for a SKU cut. Someone ranks SKUs by revenue. The bottom twenty percent gets slated for discontinuation. Three months later, contribution margin has dropped. The brand manager can't explain why.

Usually it's because one of those "low revenue" SKUs had seventy eight percent margin, another was the only shelf SKU at the entry price point, and a third was the gateway product that got new buyers into the brand. Cutting by revenue is a rounding error optimized into a disaster.

Six steps. Boring on purpose.

01
Score every SKU on three axes.
Velocity (units per week), contribution margin percent, and shelf role (entry, core, premium, promo). Not revenue.
02
Map cannibalization across the lineup.
Which SKUs actually substitute for each other? Correlation of sell through at the store level tells you this faster than a consumer survey.
03
Segment by shelf role, not portfolio position.
Entry price point SKUs deserve different thresholds than core or premium. Mixing them into one ranking is how you kill the gateway.
04
Run the cut twice. Once on revenue, once on contribution.
The SKUs that show up on both lists are safe to discontinue. The ones that appear on only one are worth a conversation, not a decision.
05
Model the post cut scenario.
Assume seventy percent of displaced volume moves to the nearest substitute. Recalculate total margin. If it's negative, reconsider.
06
Write the one pager before you present.
One page. Top three SKUs to cut, one paragraph rationale each, post cut margin impact. If you can't fit it on one page, you don't have a recommendation yet.
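Steps 01, 04, and 05 can be sketched in a few lines of pandas. Everything here is illustrative: the column names (`sku`, `revenue`, `contribution`, `margin_pct`, `nearest_substitute`) and the sample numbers are assumptions for the sketch, not data from the case.

```python
import pandas as pd

def double_cut(df: pd.DataFrame, cut_frac: float = 0.20):
    """Step 04: run the cut twice, once on revenue and once on contribution."""
    n = max(1, round(len(df) * cut_frac))
    by_revenue = set(df.nsmallest(n, "revenue")["sku"])
    by_contribution = set(df.nsmallest(n, "contribution")["sku"])
    safe = by_revenue & by_contribution     # on both lists: safe to discontinue
    discuss = by_revenue ^ by_contribution  # on one list only: a conversation
    return safe, discuss

def post_cut_margin(df: pd.DataFrame, cuts: set, substitution: float = 0.70) -> float:
    """Step 05: assume 70% of a cut SKU's volume moves to its nearest substitute."""
    keep = df[~df["sku"].isin(cuts)].set_index("sku")
    total = keep["contribution"].sum()
    for _, row in df[df["sku"].isin(cuts)].iterrows():
        sub = row["nearest_substitute"]
        if sub in keep.index:
            # Displaced revenue lands on the substitute at the substitute's margin.
            total += substitution * row["revenue"] * keep.at[sub, "margin_pct"]
    return total

# Step 02 (not shown): nearest_substitute can come from correlating
# store-level weekly sell-through across SKUs, e.g. sell_through.corr().
df = pd.DataFrame({
    "sku": list("ABCDEF"),
    "revenue": [100, 90, 80, 40, 20, 25],
    "contribution": [30.0, 27.0, 24.0, 8.0, 15.6, 5.0],
    "margin_pct": [0.30, 0.30, 0.30, 0.20, 0.78, 0.20],
    "nearest_substitute": ["B", "A", "A", "C", "C", "C"],
})
safe, discuss = double_cut(df, cut_frac=1 / 3)
# safe -> {"F"}; discuss -> {"D", "E"}: E's 78% margin keeps it off the contribution list
```

Note how the revenue-only ranking would have cut E and F together; the double cut downgrades E to a conversation, which is exactly the failure mode the framework is built to catch.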
Cutting by revenue is a rounding error optimized into a disaster. The wrong twenty percent can take twice as much margin with it.

Applied to a public retail dataset.

Ran the framework against an open CPG retail dataset covering twenty four months of sell through across forty two SKUs. The revenue ranked cut killed two SKUs that were each over seventy percent margin. The three axis framework kept those two and flagged two different SKUs as better candidates. Net modeled margin impact was plus 8.2 percent versus minus 3.1 percent. Small sample, but the pattern is the point.

Excel · Power BI
Pareto analysis
Contribution margin
Scenario modeling
Next case · 08 / 09
A returns portal BRD
08 / 09 Returns portal · BRD
Independent · BSA documentation · 2026

A full BRD for a B2B returns portal.

Type
Self initiated
Scenario
Mid market B2B e-commerce
Deliverable
28 page BRD
Time
~20 hrs, three weekends
Fig. 01 · BRD table of contents and a user story page
BRD · Returns Portal v1 · Table of contents
1. Executive summary · pg 2
2. Problem & objectives · pg 4
3. Stakeholder map · pg 6
4. User personas · pg 8
5. Current state flow · pg 11
6. Target state flow · pg 13
7. User stories & AC · pg 15
8. Functional requirements · pg 19
9. Non functional reqs · pg 22
10. Integration points · pg 24
11. Solution scorecard · pg 26
12. Open questions · pg 28

§ 7.3 · Story US-03 · RMA creation
AS A buyer placing a return request
I WANT to submit an RMA without calling CS
SO THAT I can track status and get a label fast
ACCEPTANCE CRITERIA
✓ Buyer can log in with existing SSO
✓ System validates order within return window
✓ Buyer can select line items and reason codes
✓ RMA number generated and emailed within 30s
✓ Prepaid label attached for eligible SKUs
✓ Buyer can track status at any stage
✓ CS notified on high value returns (> $2,500)
PRIORITY: MUST · STORY POINTS: 8
BRD structure + sample user story page · Independent · 2026

A problem I hear about every week.

Mid market B2B e-commerce brands almost always handle returns badly. Customer emails customer support. CS looks up the order manually. CS cuts an RMA in a spreadsheet. CS emails a shipping label as a PDF. Finance reconciles the refund three weeks later. Nobody knows where their return is at any given moment. Buyers call in. Volume scales. CS drowns.

I picked this scenario because it's real, common, and the fix is unambiguous: a self serve portal. What's not unambiguous is what the portal actually needs to do, which is what a BSA gets paid to figure out before a line of code is written.

Twelve sections, twenty eight pages.

01
Stakeholder map with RACI.
Seven stakeholder groups. CS, ops, finance, IT, legal, sales, and the buyer. Each one assigned Responsible, Accountable, Consulted, or Informed across the ten decisions the portal touches.
02
Three user personas.
The buyer (procurement at a distributor). The CS rep (the current bottleneck). The finance reviewer (the person who eventually signs the credit note). Each persona has goals, pain points, and success metrics.
03
Current state and target state flows.
BPMN diagrams. The current state flow has 14 steps, 3 manual handoffs, and 2 places where the return silently dies. The target state flow has 7 steps, 0 manual handoffs for the 80% case, and explicit failure paths for the 20%.
04
Twelve user stories with full acceptance criteria.
Written AS A / I WANT / SO THAT. Each one has six to eight acceptance criteria written in pass or fail language, so QA and the dev team don't have to interpret.
05
Functional, non functional, and integration requirements.
Auth (SSO with existing IdP). Performance (RMA generated in under 30 seconds). Integrations (ERP, carrier API, email provider, notification bus). Separated so an architect can read them without wading through the user story layer.
06
A vendor neutral solution scorecard.
The BRD doesn't pick a vendor. It lists the criteria a build versus buy versus customize decision would be scored on, so the team running the RFP doesn't have to invent that in week three.
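The pass-or-fail discipline in the acceptance criteria is what makes them automatable. A sketch of how QA might turn US-03's criteria into assertions, run against a hypothetical RMA creation response; every field name here is invented for illustration, not taken from the BRD:

```python
# Hypothetical response payload from RMA creation (all fields illustrative).
rma = {
    "rma_number": "RMA-10042",
    "order_in_return_window": True,
    "email_sent_seconds": 12,
    "label_attached": True,
    "order_value": 3100.00,
    "cs_notified": True,
}

# AC written in pass/fail language becomes checks QA can run, not interpret.
assert rma["order_in_return_window"]                     # validated within window
assert rma["rma_number"].startswith("RMA-")              # RMA number generated
assert rma["email_sent_seconds"] <= 30                   # emailed within 30s
assert rma["label_attached"]                             # prepaid label attached
assert rma["cs_notified"] or rma["order_value"] <= 2500  # high value returns notify CS
```

The point isn't the code; it's that each criterion has exactly one true-or-false reading, so the same sentence works for the dev team, QA, and the sign-off.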
A good BRD doesn't tell the dev team how to build it. It tells them what "done" looks like so they can decide how.

Writing a BRD for a fake company is harder than the real version.

At a real company you have actual stakeholders to interview, real tickets to read, real metrics to anchor to. Writing it for a hypothetical meant I had to decide what was realistic, defend it to myself, and not lean on made up numbers to paper over the gaps. The document is tighter as a result. Every figure in it is either a range with a source, or flagged as an open question.

Biggest lesson: the hardest section was not the requirements. It was the open questions section at the end. A BSA who doesn't track what they don't know isn't a BSA. They're a note taker.

Google Docs
Lucidchart · BPMN
Figma (wireframes)
Notion
User stories & AC
Next case · 09 / 09
ERP vendor selection
09 / 09 ERP · Vendor selection
Independent · Vendor evaluation · 2026

Replacing a legacy ERP without the chaos.

Type
Self initiated
Scenario
Mid market manufacturer
Vendors
4 evaluated
Output
Scorecard + recommendation memo
Fig. 01 · Vendor scoring matrix, 25 criteria across 5 categories
Categories & weights: Core functional fit 35% · Integrations 20% · Total cost of ownership 25% · Implementation risk 15% · Vendor stability 5%
Weighted totals: NetSuite 3.9 · SAP B1 3.5 · MS Dynamics 3.4 · Acumatica 3.5
RECOMMEND: NetSuite — wins on weighted core fit + integrations
Vendor scoring matrix · 25 criteria across 5 categories · Independent · 2026

A $40M manufacturer on a fifteen year old ERP.

The hypothetical: a mid market manufacturer, forty million in revenue, still running a fifteen year old on premise ERP their IT team has patched so many times nobody remembers which modules are OEM anymore. CFO wants to move to modern cloud ERP. Options on the table are NetSuite, SAP Business One, Microsoft Dynamics 365 Business Central, and Acumatica.

The CFO asks for a recommendation. IT wants integrations. Ops wants manufacturing features. Finance wants revenue recognition. The BSA job is to not pick a favorite. It's to build the scoring framework, run the framework honestly, and hand the leadership team a decision they can defend at the board.

Five weighted categories. Twenty five criteria.

01
Core functional fit · 35% weight.
Manufacturing BOMs. Lot traceability. Batch release workflows. Inventory costing methods. Multi warehouse. Five sub criteria, each scored 1 to 5 against demo evidence.
02
Integrations · 20% weight.
CRM, EDI, bank feeds, third party logistics, QMS. How many are native. How many need middleware. How many are custom. This is where paper promises die.
03
Total cost of ownership · 25% weight.
License, implementation, customization, annual maintenance, required infrastructure. Five years modeled. TCO is always bigger than the sticker. Always.
04
Implementation risk · 15% weight.
Vendor partner quality in the region. Reference customer calls. Typical time to go live for a business our size. How opinionated the system is (more opinionated = less custom work, more behavior change).
05
Vendor stability · 5% weight.
Parent company health. Product roadmap transparency. How likely this vendor will still be meaningfully investing in this product in five years. A small weight, but not zero, because switching ERP again would be a crisis.
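The weighted total itself is trivial arithmetic, which is the point: the weights carry the whole argument. A sketch using the five category weights above; the vendor scores are placeholder values for illustration, not the figures from the matrix.

```python
# Category weights from the evaluation above; must sum to 100%.
WEIGHTS = {
    "core_fit": 0.35,
    "integrations": 0.20,
    "tco": 0.25,
    "impl_risk": 0.15,
    "stability": 0.05,
}

def weighted_total(scores: dict, weights: dict = WEIGHTS) -> float:
    """Weighted sum of 1-to-5 category scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[c] * w for c, w in weights.items())

# Placeholder scores for one vendor, invented for the sketch.
netsuite = {"core_fit": 4.2, "integrations": 4.5, "tco": 3.1,
            "impl_risk": 4.0, "stability": 2.8}
total = weighted_total(netsuite)  # ~3.9 under these placeholder scores
```

Because the function takes the weights as an argument, the sensitivity question — what happens if the board weights implementation risk at 25% instead of 15% — is a one-line re-run, not a new spreadsheet.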
A good scorecard doesn't pick the vendor for you. It makes the tradeoffs legible, so the humans can decide and own the decision.

NetSuite, conditionally. Acumatica is the live runner up.

NetSuite comes out on top of the weighted score (3.9) on the strength of core functional fit and integration breadth. But the recommendation isn't unconditional. Two conditions are attached: the partner must be the named regional implementation firm (references checked), and the cutover plan must phase manufacturing first, finance second, to de-risk revenue recognition.

Acumatica is the live runner up (3.5). If the board prioritizes implementation risk more heavily than I did (say 25% weight instead of 15%), Acumatica wins on math. That's explicit in the recommendation memo, not buried, because a BSA who buries the sensitivity analysis is lying to the decision makers.

Three lessons for the next vendor eval.

01
Weights are the whole game.
Most of the argument is about what matters, not what scored highest. The scorecard gets done in a weekend. Agreeing on the weights takes three meetings. Plan accordingly.
02
Demo scripts beat demo decks.
Vendors will always look good in their own demo. I wrote a five scenario demo script the vendor had to run live, including the scenario they don't want to show. That's where the real differences came out.
03
Reference calls are interviews.
A reference call run by a BSA who's done it before gets you three things the vendor's deck won't: what broke, what cost double, and who left the implementation team halfway through.
Excel · weighted scorecard
Demo scripts
TCO modeling
RACI matrix
Reference call playbook
Back to the start · 01 / 09
Pharmvista Junior BSA