The Architecture of abhinavflac
Most portfolio write-ups stop at the surface: typography, animations, page transitions, and maybe a screenshot of the homepage. That is useful, but incomplete. abhinavflac is not just a visual shell. It is a content system, a live data layer, a profile application, an authentication product, a comment platform, a media pipeline, and a protected contact workflow wrapped inside a personal site.
This post is a complete architectural walkthrough of the whole system, not only the parts that are visually obvious and not only the features that were edited recently. I want it to read the way I would want to read someone else's system notes: practical, honest, and specific enough that another engineer could understand the boundaries, the tradeoffs, and the shape of the codebase.
What The Site Actually Is
At a high level, the platform has three responsibilities:
- Publish content about my work and thinking.
- Deliver a strong visual experience that feels designed rather than templated.
- Support real interaction: auth, profiles, comments, saved posts, uploads, and protected contact messaging.
That means the architecture has to do two things at once:
- behave like a mostly static content site when possible
- behave like a small product when state, identity, or protection are required
The resulting split is a good one: the frontend optimizes for presentation and flow, while the backend owns persistent state, identity, verification, and storage.
In practical terms, the system fans out like this:
| Layer | Primary job | What it owns in practice |
|---|---|---|
| abhinavflac frontend | presentation and flow | static pages, animation, MDX content, route-level adapters, client-side state |
| FastAPI backend | protected application behavior | identity, persistent records, storage policy, protected workflows |
| shared infrastructure | support services beneath both apps | content assets, service adapters, database, object storage, identity, email, and security services |
The presentation layer and the application layer are intentionally separated, but they are clearly designed to work as one system rather than two unrelated apps.
System Boundaries
The current setup separates concerns like this:
| Boundary | Main role | Why the split matters |
|---|---|---|
| presentation layer | Next.js frontend, MDX content, animation, page composition, and data adapters close to the UI | Keeps experience code and editorial content moving together |
| application layer | FastAPI backend, auth, profiles, comments, saved posts, contact, and uploads | Keeps identity and persistent state isolated from presentation |
| shared services | database, object storage, identity providers, email delivery, and security checks | Keeps critical infrastructure policy outside the visual layer |
This boundary is important. A blog post should still render if the API is down. A project case study should still be indexable even if comments fail. On the other hand, saved posts, auth state, profile media, and contact verification should not be pushed into a static-only model just because the public site looks editorial.
The Frontend Shell
The frontend is built on the Next.js App Router. The root layout acts as the control room for the experience:
- global styles are loaded once
- providers wrap the tree
- the preloader receives a site asset manifest
- route asset prefetching starts early
- smooth scrolling and click effects wrap the visible experience
- the navigation is always present and theme-aware
- a grain overlay unifies the visual texture across pages
This is a subtle but meaningful design choice. Instead of each page owning its own startup logic, the system has a shared shell that coordinates startup, navigation, asset warming, and motion timing.
The layout is assembled in a deliberate order:
- The root application shell mounts the global providers and establishes shared state.
- Booking embeds and startup systems initialize early.
- The preloader receives the asset manifest and begins warming critical assets.
- The route prefetcher starts preparing likely next-route media.
- Smooth scrolling wraps the visible interface so navigation and page content move within one motion system.
- The navbar and the active page render inside that shell.
- The grain overlay sits over the final composition to keep the visual texture consistent.
The shell is not decorative. It is the part of the app that makes different pages feel like one authored experience.
Motion As Architecture, Not Ornament
There are several animation systems in the frontend, but they are not all doing the same job.
| Motion layer | Purpose |
|---|---|
| Preloader reveal | Controls the very first impression and synchronizes asset readiness |
| Section entrance animation | Gives blocks a readable reveal order |
| Scroll-linked motion | Lets titles, hero elements, and article surfaces respond to position |
| Route transition overlay | Makes article navigation feel continuous instead of abrupt |
| Dynamic navbar theming | Keeps the nav readable as backgrounds shift from light to dark |
One example I particularly like is the article transition link. Instead of navigating instantly from a blog list or project card to the article page, the site creates a temporary overlay with split panels and a loading label, animates it in, performs navigation, and then animates out. This creates continuity between source and destination without forcing the page itself to become over-animated.
Another good example is the navbar theme logic. It does not simply use a route-level light or dark variant. It probes the live DOM near the top of the screen, checks what kind of surface is underneath, and adapts so the nav remains legible.
That is a small detail, but it reveals the design philosophy of the whole project: motion and styling should respond to context, not just route names.
The Content Layer
The blog and work sections are built from local MDX files. This is one of the most important architectural choices in the project because it keeps editorial content in the same review and deployment path as the rest of the codebase.
The content pipeline follows the same shape for both blogs and projects:
| Source | Loader | Processing steps | Destination |
|---|---|---|---|
| blog MDX content | content loader | parse frontmatter, normalize article body, derive slug from content naming | blog article pages |
| project MDX content | content loader | parse frontmatter, normalize article body, derive slug from content naming | project article pages |
There are two notable parts here.
First, the app does not use a heavy MDX runtime. It uses lightweight content parsing utilities that extract frontmatter, normalize text, and strip scaffold markup before handing the article body to a custom renderer.
Second, blog posts and project case studies share the same long-form article shell. That means the typography, sidebar metadata, entrance choreography, cover media, save actions, link copying, and comment thread behavior all stay consistent across both kinds of long-form content.
That shared shell is a very good architectural decision. It avoids the common problem where blog pages and work pages start drifting into two separate design systems.
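The lightweight parsing step is easy to picture as a single function rather than a framework. A minimal sketch, assuming a simple `key: value` frontmatter block (the function name, field names, and normalization rules here are illustrative, not the loader's real API):

```python
import re

def parse_article(raw: str) -> dict:
    """Split a raw MDX file into frontmatter fields and a normalized body.

    A hedged sketch: the real loader handles more field types and strips
    scaffold markup beyond what is shown here.
    """
    meta: dict = {}
    body = raw
    match = re.match(r"^---\n(.*?)\n---\n?", raw, re.DOTALL)
    if match:
        for line in match.group(1).splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip().strip('"')
        body = raw[match.end():]
    # Normalize: trim surrounding whitespace so the renderer sees a clean body.
    return {"meta": meta, "body": body.strip()}

raw = '---\ntitle: "Shipping the shell"\nslug: shipping-the-shell\n---\n\n# Intro\nBody text.'
article = parse_article(raw)
```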
The MDX Renderer
The custom renderer deserves its own section because it explains a lot about how the site balances flexibility and control.
Instead of allowing arbitrary embedded React everywhere, the renderer supports a deliberate subset:
- headings
- paragraphs
- inline links, bold, italics, and code
- quotes
- ordered and unordered lists
- tables
- fenced code blocks
- images
- a small set of purpose-built content blocks for richer media where needed
That constraint is healthy. It gives me enough expressive power to write rich case studies, but it keeps the article surface stable and predictable. I do not need to design a bespoke layout for every post just because I can.
It also means the article format stays consistent. Instead of leaning on arbitrary embed logic, the post can express architecture through normal markdown structure: sections, tables, numbered flows, and short explanatory blocks.
The Homepage As A Composed System
The homepage is not one monolithic hero section. It is assembled from multiple content and motion modules: hero, signal strip, text showcases, latest sections, expansion imagery, partner copy, mountain transitions, and CTA surfaces.
The homepage itself pulls its latest work and latest writing from the content layer, not from hardcoded text inside the page composition. That matters because the homepage stays connected to the content system instead of becoming a manually maintained duplicate.
The signal strip is also worth calling out. It mixes:
- stat cycling
- animated social icons
- CTA routing
- visual synchronization with the preloader event
Even something that looks small at the top of the page is actually connected to the startup choreography and shared content data.
State On The Frontend
There are three meaningful state layers in the frontend.
| Layer | Responsibility |
|---|---|
| local component state | temporary UI state, animations, form state, tabs, notices |
| shared React context | auth, saved posts, public profile cache, live presence, weather, music |
| lightweight memory cache | short-lived API response reuse and stale-while-refresh behavior |
This is a nice middle ground. The site does not need a full client state framework, but it also does not pretend that a product with profiles, comments, saved posts, and live widgets can be handled cleanly with prop drilling alone.
Authenticated Session State
The authenticated session layer owns the current user and auth status. Its behavior is practical:
- initialize from `localStorage` when possible
- attempt a current-user check
- on unauthorized response, try the refresh token flow
- if refresh succeeds, retry the user fetch
- if refresh fails, clear tokens and reset state
The result is a user experience that feels persistent without requiring every page to re-implement auth recovery logic.
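The recovery ladder above fits in one function. A sketch with injected callables standing in for the real client calls (`fetch_me`, `refresh`, and `clear_tokens` are assumed names, not the site's actual API):

```python
class Unauthorized(Exception):
    """Stand-in for a 401 response from the API client."""

def resolve_session(fetch_me, refresh, clear_tokens):
    """Initialize auth state: try the user, fall back to refresh, then reset."""
    try:
        return fetch_me()                 # happy path: access token still valid
    except Unauthorized:
        if refresh():                     # refresh flow stored a new token pair
            try:
                return fetch_me()         # retry once with the refreshed token
            except Unauthorized:
                pass
        clear_tokens()                    # refresh failed: clear tokens, reset state
        return None

# Simulate an expired access token paired with a working refresh token.
calls = {"fetches": 0, "cleared": False}

def fake_fetch_me():
    calls["fetches"] += 1
    if calls["fetches"] == 1:
        raise Unauthorized()
    return {"username": "demo"}

user = resolve_session(fake_fetch_me, refresh=lambda: True,
                       clear_tokens=lambda: calls.update(cleared=True))
```

The point of the shape is that every page gets this recovery behavior for free from the context layer instead of re-implementing it.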
Ambient App Data
The second major layer combines data that is not strictly auth but still feels application-wide:
- presence data
- music
- weather
- workspace presence snapshots
- the authenticated user's public profile mirror
- the authenticated user's saved posts
This context is what gives the about/profile areas their sense of being alive instead of merely static.
In-Memory Cache
The in-memory cache is simple but effective. It stores values with an expiry timestamp and supports:
- writing a fresh entry
- reading a cached entry
- invalidating stale entries
- clearing cached state when needed
That cache is then used in places like auth user loading, profile data, comments, and saved posts. The pattern is consistent:
| Cache state | UI behavior | Network behavior |
|---|---|---|
| entry is fresh | use cached data immediately | skip fetch |
| entry is stale | render cached data first | refresh in the background |
| entry is missing | show loading or empty state | fetch from the network |
For a site like this, that is often more useful than adding a much heavier caching framework.
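The whole cache can be sketched in a few lines. Names and shapes are illustrative, not the site's actual module; the `(value, is_fresh)` return maps directly onto the fresh/stale/missing table above:

```python
import time

class MemoryCache:
    """Tiny expiry-based cache mirroring the fresh/stale/missing states above."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        """Return (value, is_fresh); value is None when the key is missing.

        A stale entry still returns its value, which is what enables the
        render-cached-then-refresh-in-background behavior.
        """
        entry = self._store.get(key)
        if entry is None:
            return None, False
        value, expires_at = entry
        return value, time.monotonic() < expires_at

    def invalidate(self, key):
        self._store.pop(key, None)

cache = MemoryCache()
cache.set("profile:abhinav", {"name": "Abhinav"}, ttl_seconds=60)
value, fresh = cache.get("profile:abhinav")
```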
Profile Data And Identity Merging
The profile system is one of the more interesting parts of the frontend because it merges public and private data intelligently.
The public profile endpoint is what any visitor can request. If the logged-in user happens to be viewing their own profile, the frontend also fetches the private authenticated profile and merges the two.
That allows one route to serve two roles:
- public showcase for visitors
- management surface for the owner
The profile page then layers on:
- banner and avatar upload flows
- save/remove behavior for saved posts
- comments history presentation
- owner-only settings modal
- share actions
- role badges
- upload progress and notices
From the user's perspective it feels like one page. Underneath, it is a carefully stitched combination of public content, private state, optimistic UI, and cache-aware refresh behavior.
Presentation Adapters And Live External Data
The frontend also contains a small set of server-side adapters. These are not replacements for the main backend. They act as presentation-facing adapters for external data that belongs close to the UI.
| Adapter | Main purpose |
|---|---|
| content counts | count local blog and project entries for navigation labels |
| weather feed | fetch and normalize weather data |
| music feed | fetch recent listening data and enrich it with album metadata |
| album lookup | map a collection id to richer track data |
| workspace presence feed | fetch live workspace activity and fall back to cached snapshots |
This is a smart split of concerns.
The Python backend handles identity and persistent application data.
The frontend adapter layer handles presentation-adjacent external information:
- things the UI wants frequently
- things that may need small reshaping
- things that benefit from server-side secret usage or caching
The music adapter is a good example. It does more than proxy a listening feed. It also:
- normalizes song, artist, and album fields
- searches for richer album metadata
- scores possible matches
- upgrades generic artwork when possible
- returns a frontend-ready track object
This kind of adaptation keeps the UI components simple.
Workspace Presence And Resilience
The workspace activity system has a nice resilience pattern.
On the client side, the site listens to live presence data. On the server side, the workspace presence adapter also consults a cached snapshot when live data is missing.
That creates a layered fallback model:
| Priority | Source | Behavior |
|---|---|---|
| first | live workspace activity | serve immediately when current live presence exists |
| second | cached workspace snapshot | return the latest saved snapshot if live data is missing |
| final fallback | clean null state | keep the UI stable instead of faking stale activity indefinitely |
This is a good example of architectural maturity in a small product. Live features are allowed to fail gracefully instead of breaking the page's emotional tone.
Asset Loading, Preload Warming, And Perceived Performance
The preloader system is one of the most defining parts of the experience architecture.
At build time and route-composition time, the app creates an asset manifest. That manifest includes:
- critical fonts
- transition sprite assets
- homepage images
- about page avatar
- project cover media
- blog cover media
- route-group assets
The preloader then:
- warms preload links
- discovers already rendered assets
- loads images and videos
- waits for fonts
- updates progress UI
- dispatches `preloader-complete`
- lets page-level reveal animations begin
The startup handoff is straightforward:
- The manifest is prepared.
- Preload links are warmed.
- Critical assets and fonts are loaded.
- Progress reaches completion.
- The app dispatches `preloader-complete`.
- Page-level and section-level reveal animations begin.
After first load, the route asset prefetcher uses pathname matching to warm assets for the active route group. This makes the site feel more intentional on subsequent navigation, especially for heavy blog and project media.
In other words, loading is treated as part of the design, not just a technical waiting period.
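The pathname matching in the route prefetcher can be pictured as a prefix lookup against a route-group manifest. Everything here is an assumption for illustration (the real manifest is generated, and the actual matcher lives in the frontend):

```python
# Assumed shape: each route group maps to the media worth warming for it.
ROUTE_ASSETS = {
    "/blog": ["/media/blog/cover-1.webp"],
    "/work": ["/media/work/case-study.mp4"],
}

def assets_for_path(pathname: str) -> list[str]:
    """Match the current pathname to its route group's warmable assets."""
    for prefix, assets in ROUTE_ASSETS.items():
        if pathname == prefix or pathname.startswith(prefix + "/"):
            return assets
    return []

warm = assets_for_path("/blog/shipping-the-shell")
```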
The Frontend API Contract
The frontend talks to the backend through a typed client API layer. That layer does a surprising amount of heavy lifting:
- injects bearer tokens
- includes credentials
- handles JSON serialization
- times out slow requests
- retries protected requests after refresh when appropriate
- stores refreshed tokens
- normalizes API errors
- exposes logical resource clients for auth, users, comments, saved posts, and contact workflows
This is a useful abstraction because it prevents token, timeout, and error logic from leaking into every UI component.
The popup helper for OAuth also lives here. That matters because both login/signup flows and account-linking flows need consistent popup behavior.
The Backend Platform Layer
The FastAPI backend is small, but it is organized like an actual service rather than a collection of unrelated endpoints.
The platform layer includes:
- configuration via `pydantic-settings`
- SQLAlchemy engine and session management
- connection pool tuning
- CORS configuration for allowed frontend origins
- SlowAPI rate limiting
- environment-sensitive docs and OpenAPI exposure
- a global exception handler with safe production responses
- a lifespan hook that disposes the database engine cleanly
The backend's job is not to look impressive. Its job is to be predictable, safe, and clear about where policy lives.
Data Model Design
The data model is compact and focused.
| Model | Main purpose |
|---|---|
| User | core identity, role, profile fields, session versioning, soft delete flags |
| provider identity | provider-specific login links for email, Google, and GitHub |
| verification session | opaque OTP sessions for registration and password reset |
| Comment | threaded discussion attached to a slug and content type |
| Reaction | per-user reactions on comments |
| SavedPost | per-user saved blog or project entry |
Two design choices stand out here.
First, content references use a stable content identifier rather than direct foreign keys into a content table. That works because blog and project content live as authored files, not as database rows.
Second, OTP flows are backed by server-owned verification sessions rather than embedding all session state in the client. That allows registration and reset flows to use opaque session IDs with attempt counters and expiration timestamps.
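A server-owned verification session can be sketched like this. The field names, TTL, and attempt budget are assumptions, but the core properties match the design above: only an opaque id leaves the server, the OTP is stored hashed, and every check spends attempt budget:

```python
import hashlib
import secrets
import time

def new_verification_session(email, username, password_hash, ttl_seconds=600):
    """Create a server-owned OTP session; only the opaque id goes to the client."""
    otp = f"{secrets.randbelow(10**6):06d}"          # 6-digit one-time code
    session = {
        "id": secrets.token_urlsafe(16),             # opaque id returned to the client
        "email": email,
        "username": username,
        "password_hash": password_hash,
        "otp_hash": hashlib.sha256(otp.encode()).hexdigest(),
        "expires_at": time.time() + ttl_seconds,
        "attempts_left": 5,
    }
    return session, otp  # the raw otp leaves the server only via email

def verify_otp(session, candidate):
    """Check a submitted code against the stored hash, budget, and expiry."""
    if time.time() > session["expires_at"] or session["attempts_left"] <= 0:
        return False
    session["attempts_left"] -= 1                    # every check spends budget
    return hashlib.sha256(candidate.encode()).hexdigest() == session["otp_hash"]

session, otp = new_verification_session("a@example.com", "abhinav", "hashed-password")
```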
The Authentication Model
The auth layer supports:
- email registration with OTP verification
- email login
- public reactivation after soft delete
- refresh tokens
- logout by token version invalidation
- forgot password and reset-password flows
- Google OAuth
- GitHub OAuth
- provider linking and unlinking
This is more than enough for a portfolio app, and that is exactly why the structure matters.
Email Registration
The registration flow is intentionally server-owned:
| Step | Actor | Action | Result |
|---|---|---|---|
| 1 | frontend | submit registration request | backend receives email, username, and password candidate |
| 2 | backend | create a verification session | email, username, hashed password, hashed OTP, expiry window, and attempt budget are stored server-side |
| 3 | backend | send verification email | user receives OTP |
| 4 | user + frontend | submit OTP | backend verifies the session and code |
| 5 | backend | finalize account creation | core user and email identity records are created |
This is safer and more controllable than a pure client-side staged signup flow.
OAuth Login
OAuth is handled carefully. The backend signs and timestamps state values, validates freshness, and uses a short-lived exchange code so API bearer tokens are not exposed directly in the redirect URL.
| Step | Actor | Action | Result |
|---|---|---|---|
| 1 | frontend | open provider auth flow | backend prepares provider redirect |
| 2 | backend | build signed and timestamped OAuth state | callback can later be validated for freshness and integrity |
| 3 | provider | redirect back to backend callback | backend receives provider response |
| 4 | backend | validate state and resolve user | login or link target is safely identified |
| 5 | backend | issue a short-lived exchange code | long-lived API tokens stay out of the URL |
| 6 | frontend | exchange that code through the callback surface | frontend receives the real token pair |
This is a much better flow than passing long-lived tokens around in the query string.
Token Versioning
The system uses session versioning. Tokens include that version, and the auth middleware rejects tokens whose version no longer matches.
That provides clean invalidation for:
- logout
- permanent password reset
- deactivation/reactivation
- anonymization events
At the same time, the password-change flow for an already-authenticated user is careful not to throw them out unnecessarily. It updates the password and returns a fresh token pair while keeping the active session stable.
That distinction matters. Security should be strict, but it should also be sensible.
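The version check itself is a one-line comparison in the auth middleware; all the power is in where the version gets bumped. A sketch with assumed field names:

```python
class TokenVersionMismatch(Exception):
    pass

def check_token(claims: dict, user: dict) -> dict:
    """Reject any token minted before the user's current session version."""
    if claims["session_version"] != user["session_version"]:
        raise TokenVersionMismatch("token was invalidated by logout or reset")
    return user

def logout_everywhere(user: dict) -> None:
    """Bump the version: every outstanding token instantly stops validating."""
    user["session_version"] += 1

user = {"id": 1, "session_version": 3}
claims = {"sub": 1, "session_version": 3}
ok = check_token(claims, user)

logout_everywhere(user)
try:
    check_token(claims, user)
    rejected = False
except TokenVersionMismatch:
    rejected = True
```

The gentle password-change path described above simply issues a fresh token pair carrying the new version instead of only bumping it, so the active session survives.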
Account Linking And Provider Identity
The provider identity model lets a single user connect multiple providers. That powers:
- email/password login
- Google login
- GitHub login
- popup-based provider linking from profile settings
The linking flow is especially important because it shows how the frontend and backend cooperate well.
- the frontend requests a provider-specific auth URL from the backend
- the auth flow happens inside a popup
- the backend redirects linking results to the frontend auth callback surface
- the popup posts a message back to the opener
- the settings modal refreshes state without reloading the main page
That is a clean, product-friendly pattern.
User Profiles, Media, And Account Lifecycle
The user/profile layer goes beyond simple CRUD.
Public Profile Assembly
The backend profile service returns a public profile plus computed activity data:
- comment count
- recent comments
- password presence flags where appropriate
It also performs lazy deletion checks, so an account whose scheduled deletion window has passed can be anonymized during access rather than relying only on background cleanup.
Media Uploads
Avatar and banner uploads use direct-to-object-storage presigned URLs. The flow is:
| Step | Actor | Action | Result |
|---|---|---|---|
| 1 | frontend | request presigned upload | backend prepares upload contract |
| 2 | backend | return upload URL, storage key, and confirm token | browser has everything needed for direct upload |
| 3 | browser | upload file directly to object storage | backend bandwidth is bypassed for the file body |
| 4 | frontend | confirm upload with backend | backend validates the trusted upload details |
| 5 | backend | persist final media URL on the user record | avatar or banner becomes part of the profile |
The confirm token binds the upload to:
- the user
- the upload key
- the upload purpose (`avatar` or `banner`)
That keeps uploads efficient without turning object storage into an unaudited write surface.
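One plausible shape for that binding is an HMAC over the three values, recomputed and compared at confirm time. A sketch, assuming an HMAC design and illustrative names (the real token format may differ):

```python
import hashlib
import hmac

SECRET = b"upload-signing-secret"   # assumption: the real key comes from settings

def make_confirm_token(user_id: int, key: str, purpose: str) -> str:
    """Bind an upload to a user, a storage key, and a purpose."""
    if purpose not in ("avatar", "banner"):
        raise ValueError("unsupported upload purpose")
    msg = f"{user_id}:{key}:{purpose}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def confirm_upload(user_id: int, key: str, purpose: str, token: str) -> bool:
    """The confirm endpoint recomputes and compares; nothing from the client is trusted."""
    expected = make_confirm_token(user_id, key, purpose)
    return hmac.compare_digest(token, expected)

token = make_confirm_token(42, "uploads/42/avatar.webp", "avatar")
```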
Deactivation And Anonymization
Account deletion is modeled as a soft-delete grace period rather than immediate destruction.
- deactivation marks the account inactive
- a scheduled deletion timestamp is set for the grace window
- token version increments to invalidate active sessions
- reactivation can restore access within the grace window
- once expired, anonymization wipes PII, media, reactions, and saved-post state
This is a thoughtful lifecycle model. It protects users from accidental loss while still allowing eventual cleanup and privacy enforcement.
Comments, Replies, Reactions, And Saved Posts
The community layer is deliberately scoped, but well designed.
Comments
Comments are stored against stable content identifiers. Replies are nested through parent relationships. The backend fetches top-level comments, then walks descendants in bounded batches so deep or active threads cannot explode memory usage.
The frontend comment section adds:
- optimistic posting
- short-lived cache reuse
- stale background refresh
- recent-comment updates into profile context
That combination makes the thread feel responsive without making the data model fragile.
Reactions
Reactions are scoped per user, per comment, per reaction type. The model enforces uniqueness so one user cannot spam the same reaction repeatedly.
Saved Posts
Saved posts use a lightweight model keyed by:
- user id
- content identifier
- content type
The frontend again mirrors the backend well:
- fetch status per article
- fetch full saved list for the owner
- optimistically remove when unsaving
- keep owner-only visibility on the profile page
This is a nice pattern for private reader utility features on top of static editorial content.
Contact Form And Protected Messaging
The contact system is more sophisticated than a plain email link.
On the frontend, the contact page includes:
- help-topic selection
- structured field validation
- a Turnstile widget when a site key is present
- submit, success, and failure states
On the backend:
- Turnstile tokens are verified server-side
- remote IP is forwarded when possible
- hostname restrictions can be enforced
- errors are translated into user-readable messages
- successful submissions produce a styled email notification
The protected messaging flow is:
| Step | Actor | Action | Result |
|---|---|---|---|
| 1 | frontend | collect message fields and Turnstile token | request is prepared with both content and protection proof |
| 2 | frontend | send payload to the contact endpoint | backend receives the submission |
| 3 | backend | verify the Turnstile token | invalid, expired, or misconfigured checks are rejected clearly |
| 4 | backend | format successful submissions into email | a styled contact notification is produced and sent |
This is a good example of where a portfolio site becomes a real product surface. Once strangers can send input, protection, messaging quality, and failure handling all matter.
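The error translation step in the table can be sketched as a lookup over Turnstile's siteverify error codes. The code strings follow Cloudflare's documented siteverify errors; the message wording and function name are assumptions for this sketch:

```python
# Map Turnstile siteverify error codes to user-readable messages.
MESSAGES = {
    "missing-input-response": "Please complete the verification challenge.",
    "invalid-input-response": "Verification failed. Please try the challenge again.",
    "timeout-or-duplicate": "That verification expired. Please retry the challenge.",
    "missing-input-secret": "Verification is misconfigured on our side.",
    "invalid-input-secret": "Verification is misconfigured on our side.",
}

FALLBACK = "Verification could not be completed. Please try again later."

def explain_turnstile_failure(error_codes: list[str]) -> str:
    """Return the first recognizable error as a user-facing message."""
    for code in error_codes:
        if code in MESSAGES:
            return MESSAGES[code]
    return FALLBACK

message = explain_turnstile_failure(["timeout-or-duplicate"])
```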
Mail, OTP, And Operational Messaging
The backend mail service handles three different classes of outbound email:
- contact notifications
- registration OTP emails
- password reset OTP emails
The contact email has a more branded HTML layout, while registration and reset flows use simpler utility-first templates. That separation makes sense:
- contact mail is for me as the receiver
- OTP mail is for users who need fast clarity over design complexity
This is a subtle but correct prioritization of communication goals.
Backend Policies, Caching, And HTTP Behavior
The codebase also uses HTTP semantics quite intentionally.
- public profile responses use controlled browser and edge caching
- comments and saved-post status responses are `no-store`
- weather uses timed revalidation
- music and workspace data stay dynamic where needed
- frontend content pages are statically generated when possible
This mix matters. Not all data should be treated the same:
- authored content can be mostly static
- live presence needs fresh reads
- authenticated state should not be cached recklessly
- external data can often tolerate controlled revalidation
This is exactly the kind of judgment that separates a clean architecture from a pile of working endpoints.
Strengths Of The Current Architecture
Several things are working especially well together here.
| Strength | Why it matters |
|---|---|
| static content + dynamic interaction split | keeps public pages fast without sacrificing product features |
| shared article shell for blog and work | reduces UI drift and duplication |
| typed frontend API layer | centralizes auth, refresh, timeout, and error behavior |
| small but useful caching strategy | improves responsiveness without a complex state stack |
| popup-based OAuth linking | avoids awkward full-page reloads in settings flows |
| presigned media uploads | keeps backend bandwidth and storage trust under control |
| graceful fallback for live data | keeps the site feeling stable even when external sources wobble |
Current Tradeoffs And Constraints
No architecture is perfect, and this one has conscious constraints.
- The custom MDX renderer keeps content controlled, but it also limits richer embedded visualizations unless new components are added.
- The lightweight cache is easy to reason about, but it does not replace full query invalidation tooling if the app grows much further.
- Comments are intentionally scoped and simple; moderation and abuse tooling are still minimal.
- The presentation adapters that normalize external data are helpful, but they also mean some presentation logic now lives server-side near the frontend.
- The system is carefully split, but that also means frontend and backend changes need coordination when auth or profile flows evolve.
These are reasonable tradeoffs for the current product size. They are signs of deliberate scope, not negligence.
Final Perspective
abhinavflac works because it respects the difference between what should be static, what should be dynamic, and what should be protected.
The static side gives the site clarity:
- MDX content
- generated routes
- shared article surfaces
- predictable page composition
The dynamic side gives it life:
- music
- presence
- workspace activity
- weather
- comments
- saved posts
- profile updates
The protected side gives it trust:
- OTP sessions
- token versioning
- OAuth state signing
- upload confirmation tokens
- Turnstile verification
- account lifecycle enforcement
That balance is the real architecture.
The visual layer is what people notice first. The system boundaries are what make the whole thing hold together after that first impression is over.
