# Guide: Cost & Effort
Estimate integration effort realistically and scale in phases
## Why this guide?
Not every integration has the same effort profile. In practice, these factors usually drive complexity:
- very large data estates (e.g. large SharePoint landscapes)
- complex ERP structures (e.g. SAP with very high table counts)
- strict on-prem/security constraints
## Effort matrix (S / M / L)
| Class | Typical approach | Time-to-value | Main drivers |
|---|---|---|---|
| S | Native connector, small scope | Very fast | Low complexity |
| M | Export + Dataset Manager + limited connectivity | Medium | Preparation and scope definition |
| L | Custom MCP + large data + strict security | Higher | Architecture, governance, operations |
## Where effort usually increases
### 1) Large SharePoint estates
When many sites/files must be connected, effort grows in:
- scope definition (what to onboard first)
- data quality cleanup (duplicates, outdated content, noise)
- retrieval strategy (avoid treating everything equally too early)
### 2) SAP / large ERP landscapes
With very high table counts, full live integration is rarely a good starting point.
Pragmatic path:
- choose use-case-driven table subsets
- start with a small table scope
- expand only when value is proven
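The subset-first approach above can be sketched as a small scope registry that only grows once value is proven. This is a minimal sketch under assumed conventions; all use-case and table names are illustrative placeholders, not real ERP tables.

```python
# Registry of approved tables per use case. Start with a pilot scope
# of a handful of tables for one use case (names are placeholders).
approved_scope: dict[str, set[str]] = {
    "invoice-questions": {"DOC_HEADER", "DOC_ITEMS"},
}

def expand_scope(use_case: str, new_tables: set[str], value_proven: bool) -> None:
    """Add tables to a use case's scope, but only after value is proven."""
    if not value_proven:
        raise ValueError("expand scope only when value is proven")
    approved_scope.setdefault(use_case, set()).update(new_tables)

# Expand the pilot scope once the first use case has demonstrated value.
expand_scope("invoice-questions", {"PAYMENT_TERMS"}, value_proven=True)
print(sorted(approved_scope["invoice-questions"]))
# -> ['DOC_HEADER', 'DOC_ITEMS', 'PAYMENT_TERMS']
```

The point of the gate is organizational, not technical: it forces an explicit value check before each scope expansion instead of onboarding the full table landscape up front.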
## Effort levers (what actually helps)
- keep scope small: start with 1-2 high-value use cases
- read-only first: add write depth later
- export-first if uncertain: validate quickly before deep build
- assign owners clearly: business + IT + integration owner
## Recommended phases
- Pilot: one use case, one data scope, fast validation
- Stabilization: permissions, monitoring, data quality
- Scale: additional scopes/systems, deeper integrations
## Quick estimation (5 questions)
If 3+ answers are yes, you are likely in M/L:
- strict on-prem/security requirements?
- very large data volumes?
- no native connector available?
- write actions required in target systems?
- inconsistent data quality today?
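The five-question screen above is simple enough to express as code. A minimal sketch, assuming each question is answered yes/no; the question keys and function name are illustrative, only the "3+ yes means M/L" rule comes from the guide.

```python
# The five screening questions from the checklist above.
QUESTIONS = [
    "strict on-prem/security requirements",
    "very large data volumes",
    "no native connector available",
    "write actions required in target systems",
    "inconsistent data quality today",
]

def estimate_effort(answers: dict[str, bool]) -> str:
    """Return a rough effort class: 'S', or 'M/L' when 3+ answers are yes."""
    yes_count = sum(answers.get(q, False) for q in QUESTIONS)
    return "M/L" if yes_count >= 3 else "S"

# Example: three yes answers push the estimate into M/L.
answers = {
    "strict on-prem/security requirements": True,
    "very large data volumes": True,
    "no native connector available": True,
    "write actions required in target systems": False,
    "inconsistent data quality today": False,
}
print(estimate_effort(answers))  # -> M/L
```

Treat the result as a conversation starter for scoping, not a sizing formula: a single "yes" on strict security can dominate the other four answers in practice.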