
The Architecture of abhinavflac

Most portfolio write-ups stop at the surface: typography, animations, page transitions, and maybe a screenshot of the homepage. That is useful, but incomplete. abhinavflac is not just a visual shell. It is a content system, a live data layer, a profile application, an authentication product, a comment platform, a media pipeline, and a protected contact workflow wrapped inside a personal site.

This post is a complete architectural walkthrough of the whole system, not only the parts that are visually obvious and not only the features that were edited recently. I want it to read the way I would want to read someone else's system notes: practical, honest, and specific enough that another engineer could understand the boundaries, the tradeoffs, and the shape of the codebase.

What The Site Actually Is

At a high level, the platform has three responsibilities:

  1. Publish content about my work and thinking.
  2. Deliver a strong visual experience that feels designed rather than templated.
  3. Support real interaction: auth, profiles, comments, saved posts, uploads, and protected contact messaging.

That means the architecture has to do two things at once:

  • behave like a mostly static content site when possible
  • behave like a small product when state, identity, or protection are required

The resulting split is a good one: the frontend optimizes for presentation and flow, while the backend owns persistent state, identity, verification, and storage.

In practical terms, the system fans out like this:

| Layer | Primary job | What it owns in practice |
| --- | --- | --- |
| abhinavflac frontend | presentation and flow | static pages, animation, MDX content, route-level adapters, client-side state |
| FastAPI backend | protected application behavior | identity, persistent records, storage policy, protected workflows |
| shared infrastructure | support services beneath both apps | content assets, service adapters, database, object storage, identity, email, and security services |

The presentation layer and the application layer are intentionally separated, but they are clearly designed to work as one system rather than two unrelated apps.

System Boundaries

The current setup separates concerns like this:

| Boundary | Main role | Why the split matters |
| --- | --- | --- |
| presentation layer | Next.js frontend, MDX content, animation, page composition, and data adapters close to the UI | keeps experience code and editorial content moving together |
| application layer | FastAPI backend, auth, profiles, comments, saved posts, contact, and uploads | keeps identity and persistent state isolated from presentation |
| shared services | database, object storage, identity providers, email delivery, and security checks | keeps critical infrastructure policy outside the visual layer |

This boundary is important. A blog post should still render if the API is down. A project case study should still be indexable even if comments fail. On the other hand, saved posts, auth state, profile media, and contact verification should not be pushed into a static-only model just because the public site looks editorial.

The Frontend Shell

The frontend is built on the Next.js App Router. The root layout acts as the control room for the experience:

  • global styles are loaded once
  • providers wrap the tree
  • the preloader receives a site asset manifest
  • route asset prefetching starts early
  • smooth scrolling and click effects wrap the visible experience
  • the navigation is always present and theme-aware
  • a grain overlay unifies the visual texture across pages

This is a subtle but meaningful design choice. Instead of each page owning its own startup logic, the system has a shared shell that coordinates startup, navigation, asset warming, and motion timing.

The layout is assembled in a deliberate order:

  1. The root application shell mounts the global providers and establishes shared state.
  2. Booking embeds and startup systems initialize early.
  3. The preloader receives the asset manifest and begins warming critical assets.
  4. The route prefetcher starts preparing likely next-route media.
  5. Smooth scrolling wraps the visible interface so navigation and page content move within one motion system.
  6. The navbar and the active page render inside that shell.
  7. The grain overlay sits over the final composition to keep the visual texture consistent.

The shell is not decorative. It is the part of the app that makes different pages feel like one authored experience.

Motion As Architecture, Not Ornament

There are several animation systems in the frontend, but they are not all doing the same job.

| Motion layer | Purpose |
| --- | --- |
| Preloader reveal | controls the very first impression and synchronizes asset readiness |
| Section entrance animation | gives blocks a readable reveal order |
| Scroll-linked motion | lets titles, hero elements, and article surfaces respond to position |
| Route transition overlay | makes article navigation feel continuous instead of abrupt |
| Dynamic navbar theming | keeps the nav readable as backgrounds shift from light to dark |

One example I particularly like is the article transition link. Instead of navigating instantly from a blog list or project card to the article page, the site creates a temporary overlay with split panels and a loading label, animates it in, performs navigation, and then animates out. This creates continuity between source and destination without forcing the page itself to become over-animated.

Another good example is the navbar theme logic. It does not simply use a route-level light or dark variant. It probes the live DOM near the top of the screen, checks what kind of surface is underneath, and adapts so the nav remains legible.

That is a small detail, but it reveals the design philosophy of the whole project: motion and styling should respond to context, not just route names.

The Content Layer

The blog and work sections are built from local MDX files. This is one of the most important architectural choices in the project because it keeps editorial content in the same review and deployment path as the rest of the codebase.

The content pipeline follows the same shape for both blogs and projects:

| Source | Loader | Processing steps | Destination |
| --- | --- | --- | --- |
| blog MDX content | content loader | parse frontmatter, normalize article body, derive slug from content naming | blog article pages |
| project MDX content | content loader | parse frontmatter, normalize article body, derive slug from content naming | project article pages |

There are two notable parts here.

First, the app does not use a heavy MDX runtime. It uses lightweight content parsing utilities that extract frontmatter, normalize text, and strip scaffold markup before handing the article body to a custom renderer.

Second, blog posts and project case studies share the same long-form article shell. That means the typography, sidebar metadata, entrance choreography, cover media, save actions, link copying, and comment thread behavior all stay consistent across both kinds of long-form content.

That shared shell is a very good architectural decision. It avoids the common problem where blog pages and work pages start drifting into two separate design systems.

The MDX Renderer

The custom renderer deserves its own section because it explains a lot about how the site balances flexibility and control.

Instead of allowing arbitrary embedded React everywhere, the renderer supports a deliberate subset:

  • headings
  • paragraphs
  • inline links, bold, italics, and code
  • quotes
  • ordered and unordered lists
  • tables
  • fenced code blocks
  • images
  • a small set of purpose-built content blocks for richer media where needed

That constraint is healthy. It gives me enough expressive power to write rich case studies, but it keeps the article surface stable and predictable. I do not need to design a bespoke layout for every post just because I can.

It also means the article format stays consistent. Instead of leaning on arbitrary embed logic, the post can express architecture through normal markdown structure: sections, tables, numbered flows, and short explanatory blocks.

The Homepage As A Composed System

The homepage is not one monolithic hero section. It is assembled from multiple content and motion modules: hero, signal strip, text showcases, latest sections, expansion imagery, partner copy, mountain transitions, and CTA surfaces.

The homepage itself pulls its latest work and latest writing from the content layer, not from hardcoded text inside the page composition. That matters because the homepage stays connected to the content system instead of becoming a manually maintained duplicate.

The signal strip is also worth calling out. It mixes:

  • stat cycling
  • animated social icons
  • CTA routing
  • visual synchronization with the preloader event

Even something that looks small at the top of the page is actually connected to the startup choreography and shared content data.

State On The Frontend

There are three meaningful state layers in the frontend.

| Layer | Responsibility |
| --- | --- |
| local component state | temporary UI state, animations, form state, tabs, notices |
| shared React context | auth, saved posts, public profile cache, live presence, weather, music |
| lightweight memory cache | short-lived API response reuse and stale-while-refresh behavior |

This is a nice middle ground. The site does not need a full client state framework, but it also does not pretend that a product with profiles, comments, saved posts, and live widgets can be handled cleanly with prop drilling alone.

Authenticated Session State

The authenticated session layer owns the current user and auth status. Its behavior is practical:

  • initialize from localStorage when possible
  • attempt a current-user check
  • on unauthorized response, try the refresh token flow
  • if refresh succeeds, retry the user fetch
  • if refresh fails, clear tokens and reset state

The result is a user experience that feels persistent without requiring every page to re-implement auth recovery logic.
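The restore-and-refresh sequence above can be sketched in a few lines. This is a language-agnostic illustration in Python, not the site's actual client code; `restore_session` and its injected callables (`fetch_current_user`, `refresh_tokens`, the token `store`) are hypothetical stand-ins.

```python
class Unauthorized(Exception):
    """Raised when a request comes back with a 401-style response."""

def restore_session(store, fetch_current_user, refresh_tokens):
    """Try to restore an authenticated session from stored tokens."""
    token = store.get("access_token")
    if token is None:
        return None  # nothing stored; stay logged out
    try:
        return fetch_current_user(token)
    except Unauthorized:
        pass
    # Access token rejected: attempt the refresh flow once.
    new_token = refresh_tokens(store.get("refresh_token"))
    if new_token is None:
        store.clear()  # refresh failed: clear tokens and reset state
        return None
    store["access_token"] = new_token
    return fetch_current_user(new_token)  # retry the user fetch
```

The key property is that the recovery path lives in one place, so individual pages never see the 401-refresh-retry dance.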

Ambient App Data

The second major layer combines data that is not strictly auth but still feels application-wide:

  • presence data
  • music
  • weather
  • workspace presence snapshots
  • the authenticated user's public profile mirror
  • the authenticated user's saved posts

This context is what gives the about/profile areas their sense of being alive instead of merely static.

In-Memory Cache

The in-memory cache is simple but effective. It stores values with an expiry timestamp and supports:

  • writing a fresh entry
  • reading a cached entry
  • invalidating stale entries
  • clearing cached state when needed

That cache is then used in places like auth user loading, profile data, comments, and saved posts. The pattern is consistent:

| Cache state | UI behavior | Network behavior |
| --- | --- | --- |
| entry is fresh | use cached data immediately | skip fetch |
| entry is stale | render cached data first | refresh in the background |
| entry is missing | show loading or empty state | fetch from the network |

For a site like this, that is often more useful than adding a much heavier caching framework.
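A minimal version of that three-state cache might look like this. It is a sketch of the pattern, not the site's real implementation, and the class and field names are illustrative.

```python
import time

class MemoryCache:
    """Tiny TTL cache with a separate stale window to support
    stale-while-refresh behavior."""

    def __init__(self):
        self._entries = {}  # key -> (value, fresh_until, stale_until)

    def set(self, key, value, ttl, stale_ttl=0, now=None):
        now = time.time() if now is None else now
        self._entries[key] = (value, now + ttl, now + ttl + stale_ttl)

    def get(self, key, now=None):
        """Return (value, state), state in {'fresh', 'stale', 'miss'}."""
        now = time.time() if now is None else now
        entry = self._entries.get(key)
        if entry is None:
            return None, "miss"
        value, fresh_until, stale_until = entry
        if now <= fresh_until:
            return value, "fresh"   # use immediately, skip fetch
        if now <= stale_until:
            return value, "stale"   # render now, refresh in background
        del self._entries[key]      # fully expired: treat as a miss
        return None, "miss"
```

The caller decides what each state means: fresh skips the network, stale renders then refetches, miss shows a loading state.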

Profile Data And Identity Merging

The profile system is one of the more interesting parts of the frontend because it merges public and private data intelligently.

The public profile endpoint is what any visitor can request. If the logged-in user happens to be viewing their own profile, the frontend also fetches the private authenticated profile and merges the two.

That allows one route to serve two roles:

  • public showcase for visitors
  • management surface for the owner

The profile page then layers on:

  • banner and avatar upload flows
  • save/remove behavior for saved posts
  • comments history presentation
  • owner-only settings modal
  • share actions
  • role badges
  • upload progress and notices

From the user's perspective it feels like one page. Underneath, it is a carefully stitched combination of public content, private state, optimistic UI, and cache-aware refresh behavior.

Presentation Adapters And Live External Data

The frontend also contains a small set of server-side adapters. These are not replacements for the main backend. They act as presentation-facing adapters for external data that belongs close to the UI.

| Adapter | Main purpose |
| --- | --- |
| content counts | count local blog and project entries for navigation labels |
| weather feed | fetch and normalize weather data |
| music feed | fetch recent listening data and enrich it with album metadata |
| album lookup | map a collection id to richer track data |
| workspace presence feed | fetch live workspace activity and fall back to cached snapshots |

This is a smart split of concerns.

The Python backend handles identity and persistent application data.

The frontend adapter layer handles presentation-adjacent external information:

  • things the UI wants frequently
  • things that may need small reshaping
  • things that benefit from server-side secret usage or caching

The music adapter is a good example. It does more than proxy a listening feed. It also:

  • normalizes song, artist, and album fields
  • searches for richer album metadata
  • scores possible matches
  • upgrades generic artwork when possible
  • returns a frontend-ready track object

This kind of adaptation keeps the UI components simple.

Workspace Presence And Resilience

The workspace activity system has a nice resilience pattern.

On the client side, the site listens to live presence data. On the server side, the workspace presence adapter also consults a cached snapshot when live data is missing.

That creates a layered fallback model:

| Priority | Source | Behavior |
| --- | --- | --- |
| first | live workspace activity | serve immediately when current live presence exists |
| second | cached workspace snapshot | return the latest saved snapshot if live data is missing |
| final fallback | clean null state | keep the UI stable instead of faking stale activity indefinitely |

This is a good example of architectural maturity in a small product. Live features are allowed to fail gracefully instead of breaking the page's emotional tone.
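The fallback chain above fits in one small function. This is a sketch under assumptions: the snapshot shape (`saved_at`, `data`) and the maximum snapshot age are invented for illustration, not taken from the real adapter.

```python
def resolve_presence(live, snapshot, now, max_snapshot_age=900):
    """Layered presence resolution: live first, then a recent cached
    snapshot, then a clean null state."""
    if live:
        return {"source": "live", "data": live}
    # Only trust a snapshot while it is reasonably recent, so the UI
    # never fakes stale activity indefinitely.
    if snapshot and now - snapshot["saved_at"] <= max_snapshot_age:
        return {"source": "snapshot", "data": snapshot["data"]}
    return {"source": "none", "data": None}  # stable empty state
```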

Asset Loading, Preload Warming, And Perceived Performance

The preloader system is one of the most defining parts of the experience architecture.

At build time and route-composition time, the app creates an asset manifest. That manifest includes:

  • critical fonts
  • transition sprite assets
  • homepage images
  • about page avatar
  • project cover media
  • blog cover media
  • route-group assets

The preloader then:

  1. warms preload links
  2. discovers already rendered assets
  3. loads images and videos
  4. waits for fonts
  5. updates progress UI
  6. dispatches preloader-complete
  7. lets page-level reveal animations begin

The startup handoff is straightforward:

  1. The manifest is prepared.
  2. Preload links are warmed.
  3. Critical assets and fonts are loaded.
  4. Progress reaches completion.
  5. The app dispatches preloader-complete.
  6. Page-level and section-level reveal animations begin.

After first load, the route asset prefetcher uses pathname matching to warm assets for the active route group. This makes the site feel more intentional on subsequent navigation, especially for heavy blog and project media.

In other words, loading is treated as part of the design, not just a technical waiting period.

The Frontend API Contract

The frontend talks to the backend through a typed client API layer. That layer does a surprising amount of heavy lifting:

  • injects bearer tokens
  • includes credentials
  • handles JSON serialization
  • times out slow requests
  • retries protected requests after refresh when appropriate
  • stores refreshed tokens
  • normalizes API errors
  • exposes logical resource clients for auth, users, comments, saved posts, and contact workflows

This is a useful abstraction because it prevents token, timeout, and error logic from leaking into every UI component.

The popup helper for OAuth also lives here. That matters because both login/signup flows and account-linking flows need consistent popup behavior.
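The shape of that client layer can be sketched as a single request function with injected dependencies. This is a Python illustration of the pattern (the real layer is a typed frontend client); the `transport` callable, `tokens` store, and `/auth/refresh` path are all assumptions for the sketch.

```python
class ApiError(Exception):
    """Normalized API error carrying a status code and message."""
    def __init__(self, status, message):
        super().__init__(message)
        self.status = status

def request(transport, tokens, method, path, body=None, retried=False):
    """One request through the client layer: inject the bearer token,
    retry once after a token refresh on 401, normalize errors."""
    headers = {}
    if tokens.get("access"):
        headers["Authorization"] = f"Bearer {tokens['access']}"
    status, payload = transport(method, path, headers, body)
    if status == 401 and not retried and tokens.get("refresh"):
        # Attempt the refresh flow, store the new token, retry once.
        _, refreshed = transport("POST", "/auth/refresh", {},
                                 {"refresh": tokens["refresh"]})
        new_access = refreshed.get("access")
        if new_access:
            tokens["access"] = new_access
            return request(transport, tokens, method, path, body, retried=True)
    if status >= 400:
        raise ApiError(status, payload.get("detail", "request failed"))
    return payload
```

Because every resource client funnels through this one function, token injection, refresh retries, and error normalization never leak into UI components.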

The Backend Platform Layer

The FastAPI backend is small, but it is organized like an actual service rather than a collection of unrelated endpoints.

The platform layer includes:

  • configuration via pydantic-settings
  • SQLAlchemy engine and session management
  • connection pool tuning
  • CORS configuration for allowed frontend origins
  • SlowAPI rate limiting
  • environment-sensitive docs and OpenAPI exposure
  • a global exception handler with safe production responses
  • a lifespan hook that disposes the database engine cleanly

The backend's job is not to look impressive. Its job is to be predictable, safe, and clear about where policy lives.

Data Model Design

The data model is compact and focused.

| Model | Main purpose |
| --- | --- |
| User | core identity, role, profile fields, session versioning, soft delete flags |
| provider identity | provider-specific login links for email, Google, and GitHub |
| verification session | opaque OTP sessions for registration and password reset |
| Comment | threaded discussion attached to a slug and content type |
| Reaction | per-user reactions on comments |
| SavedPost | per-user saved blog or project entry |

Two design choices stand out here.

First, content references use a stable content identifier rather than direct foreign keys into a content table. That works because blog and project content live as authored files, not as database rows.

Second, OTP flows are backed by server-owned verification sessions rather than embedding all session state in the client. That allows registration and reset flows to use opaque session IDs with attempt counters and expiration timestamps.

The Authentication Model

The auth layer supports:

  • email registration with OTP verification
  • email login
  • public reactivation after soft delete
  • refresh tokens
  • logout by token version invalidation
  • forgot password and reset-password flows
  • Google OAuth
  • GitHub OAuth
  • provider linking and unlinking

This is more than enough for a portfolio app, and that is exactly why the structure matters.

Email Registration

The registration flow is intentionally server-owned:

| Step | Actor | Action | Result |
| --- | --- | --- | --- |
| 1 | frontend | submit registration request | backend receives email, username, and password candidate |
| 2 | backend | create a verification session | email, username, hashed password, hashed OTP, expiry window, and attempt budget are stored server-side |
| 3 | backend | send verification email | user receives OTP |
| 4 | user + frontend | submit OTP | backend verifies the session and code |
| 5 | backend | finalize account creation | core user and email identity records are created |

This is safer and more controllable than a pure client-side staged signup flow.
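The verification-session side of that flow can be sketched like this. The field names and the 6-digit, 10-minute, 5-attempt parameters are illustrative assumptions, not the real schema, but the core properties match what the post describes: the OTP is stored only as a hash, the session ID is opaque, and expiry and attempt budgets are enforced server-side.

```python
import hashlib
import hmac
import secrets

def hash_code(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

def create_session(email, now, ttl=600, attempts=5):
    """Create a server-owned verification session for one OTP."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit OTP
    session = {
        "id": secrets.token_urlsafe(16),      # opaque session id
        "email": email,
        "code_hash": hash_code(code),         # raw code is never stored
        "expires_at": now + ttl,
        "attempts_left": attempts,
    }
    return session, code  # the raw code goes out only by email

def verify(session, submitted, now):
    """Check one OTP attempt: 'ok', 'wrong', 'expired', or 'locked'."""
    if now > session["expires_at"]:
        return "expired"
    if session["attempts_left"] <= 0:
        return "locked"
    session["attempts_left"] -= 1
    if hmac.compare_digest(session["code_hash"], hash_code(submitted)):
        return "ok"
    return "wrong"
```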

OAuth Login

OAuth is handled carefully. The backend signs and timestamps state values, validates freshness, and uses a short-lived exchange code so API bearer tokens are not exposed directly in the redirect URL.

| Step | Actor | Action | Result |
| --- | --- | --- | --- |
| 1 | frontend | open provider auth flow | backend prepares provider redirect |
| 2 | backend | build signed and timestamped OAuth state | callback can later be validated for freshness and integrity |
| 3 | provider | redirect back to backend callback | backend receives provider response |
| 4 | backend | validate state and resolve user | login or link target is safely identified |
| 5 | backend | issue a short-lived exchange code | long-lived API tokens stay out of the URL |
| 6 | frontend | exchange that code through the callback surface | frontend receives the real token pair |

This is a much better flow than passing long-lived tokens around in the query string.
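Signed, timestamped state is a small amount of code. The sketch below shows the general technique (HMAC over a timestamp and nonce) rather than the site's exact format; the 5-minute freshness window is an assumed value.

```python
import hashlib
import hmac
import secrets

def make_state(secret: bytes, now: int) -> str:
    """Build a signed, timestamped state value for the OAuth redirect."""
    payload = f"{now}:{secrets.token_urlsafe(8)}"  # timestamp + nonce
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_state(secret: bytes, state: str, now: int, max_age=300) -> bool:
    """Validate integrity (signature) and freshness (timestamp)."""
    try:
        ts, nonce, sig = state.split(":")
    except ValueError:
        return False  # malformed state
    expected = hmac.new(secret, f"{ts}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or foreign state
    return now - int(ts) <= max_age  # reject stale callbacks
```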

Token Versioning

The system uses session versioning. Tokens include that version, and the auth middleware rejects tokens whose version no longer matches.

That provides clean invalidation for:

  • logout
  • permanent password reset
  • deactivation/reactivation
  • anonymization events

At the same time, the password-change flow for an already-authenticated user is careful not to throw them out unnecessarily. It updates the password and returns a fresh token pair while keeping the active session stable.

That distinction matters. Security should be strict, but it should also be sensible.
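The versioning check itself is tiny, which is part of its appeal. A sketch, with illustrative field names:

```python
def validate_token(token: dict, user: dict) -> bool:
    """Reject tokens whose embedded session version no longer matches
    the user's current version."""
    if token["sub"] != user["id"]:
        return False
    return token["session_version"] == user["session_version"]

def invalidate_sessions(user: dict) -> None:
    """Logout, permanent reset, deactivation: bump the version so
    every outstanding token fails the check above."""
    user["session_version"] += 1
```

One integer on the user record invalidates every outstanding token at once, with no token blacklist to maintain.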

Account Linking And Provider Identity

The provider identity model lets a single user connect multiple providers. That powers:

  • email/password login
  • Google login
  • GitHub login
  • popup-based provider linking from profile settings

The linking flow is especially important because it shows how the frontend and backend cooperate well.

  • the frontend requests a provider-specific auth URL from the backend
  • the auth flow happens inside a popup
  • the backend redirects linking results to the frontend auth callback surface
  • the popup posts a message back to the opener
  • the settings modal refreshes state without reloading the main page

That is a clean, product-friendly pattern.

User Profiles, Media, And Account Lifecycle

The user/profile layer goes beyond simple CRUD.

Public Profile Assembly

The backend profile service returns a public profile plus computed activity data:

  • comment count
  • recent comments
  • password presence flags where appropriate

It also performs lazy deletion checks, so an account whose scheduled deletion window has passed can be anonymized during access rather than relying only on background cleanup.

Media Uploads

Avatar and banner uploads use direct-to-object-storage presigned URLs. The flow is:

| Step | Actor | Action | Result |
| --- | --- | --- | --- |
| 1 | frontend | request presigned upload | backend prepares upload contract |
| 2 | backend | return upload URL, storage key, and confirm token | browser has everything needed for direct upload |
| 3 | browser | upload file directly to object storage | backend bandwidth is bypassed for the file body |
| 4 | frontend | confirm upload with backend | backend validates the trusted upload details |
| 5 | backend | persist final media URL on the user record | avatar or banner becomes part of the profile |

The confirm token binds the upload to:

  • the user
  • the upload key
  • the upload purpose (avatar or banner)

That keeps uploads efficient without turning object storage into an unaudited write surface.
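One way to implement that binding is an HMAC over the three fields, recomputed at confirm time. This is a sketch of the technique, not the site's real token format:

```python
import hashlib
import hmac

def make_confirm_token(secret: bytes, user_id: int,
                       key: str, purpose: str) -> str:
    """Bind an upload to its user, storage key, and purpose so the
    confirm step can trust what was uploaded."""
    msg = f"{user_id}|{key}|{purpose}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def confirm_upload(secret: bytes, token: str, user_id: int,
                   key: str, purpose: str) -> bool:
    """Recompute and compare; any mismatched detail is rejected."""
    expected = make_confirm_token(secret, user_id, key, purpose)
    return hmac.compare_digest(token, expected)
```

A confirm request that swaps the user, the storage key, or the purpose (say, avatar for banner) fails the comparison, so the backend only ever persists URLs it actually issued.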

Deactivation And Anonymization

Account deletion is modeled as a soft-delete grace period rather than immediate destruction.

  1. deactivation marks the account inactive
  2. a scheduled deletion timestamp is set for the grace window
  3. token version increments to invalidate active sessions
  4. reactivation can restore access within the grace window
  5. once expired, anonymization wipes PII, media, reactions, and saved-post state

This is a thoughtful lifecycle model. It protects users from accidental loss while still allowing eventual cleanup and privacy enforcement.

Comments, Replies, Reactions, And Saved Posts

The community layer is deliberately scoped, but well designed.

Comments

Comments are stored against stable content identifiers. Replies are nested through parent relationships. The backend fetches top-level comments, then walks descendants in bounded batches so deep or active threads cannot explode memory usage.
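The bounded descendant walk looks roughly like a breadth-first traversal with a capped batch per query. A sketch under assumptions: `get_children` stands in for whatever loader maps a batch of parent IDs to child comments.

```python
def fetch_thread(get_children, roots, batch_size=50):
    """Walk reply descendants breadth-first in bounded batches so deep
    or active threads never load everything in one query."""
    comments = list(roots)
    frontier = [c["id"] for c in roots]
    while frontier:
        # Take at most batch_size parents per round trip.
        batch, frontier = frontier[:batch_size], frontier[batch_size:]
        children = get_children(batch)
        comments.extend(children)
        frontier.extend(c["id"] for c in children)
    return comments
```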

The frontend comment section adds:

  • optimistic posting
  • short-lived cache reuse
  • stale background refresh
  • recent-comment updates into profile context

That combination makes the thread feel responsive without making the data model fragile.

Reactions

Reactions are scoped per user, per comment, per reaction type. The model enforces uniqueness so one user cannot spam the same reaction repeatedly.

Saved Posts

Saved posts use a lightweight model keyed by:

  • user id
  • content identifier
  • content type

The frontend again mirrors the backend well:

  • fetch status per article
  • fetch full saved list for the owner
  • optimistically remove when unsaving
  • keep owner-only visibility on the profile page

This is a nice pattern for private reader utility features on top of static editorial content.

Contact Form And Protected Messaging

The contact system is more sophisticated than a plain email link.

On the frontend, the contact page includes:

  • help-topic selection
  • structured field validation
  • a Turnstile widget when a site key is present
  • submit, success, and failure states

On the backend:

  • Turnstile tokens are verified server-side
  • remote IP is forwarded when possible
  • hostname restrictions can be enforced
  • errors are translated into user-readable messages
  • successful submissions produce a styled email notification

The protected messaging flow is:

| Step | Actor | Action | Result |
| --- | --- | --- | --- |
| 1 | frontend | collect message fields and Turnstile token | request is prepared with both content and protection proof |
| 2 | frontend | send payload to the contact endpoint | backend receives the submission |
| 3 | backend | verify the Turnstile token | invalid, expired, or misconfigured checks are rejected clearly |
| 4 | backend | format successful submissions into email | a styled contact notification is produced and sent |

This is a good example of where a portfolio site becomes a real product surface. Once strangers can send input, protection, messaging quality, and failure handling all matter.
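The server-side verification step follows Cloudflare's `siteverify` contract: POST the secret, the token, and optionally the caller's IP, then branch on `success` and `error-codes`. The sketch below injects the HTTP call so it stays testable; the specific error messages are illustrative, not the site's real copy.

```python
SITEVERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def verify_turnstile(post, secret, token, remote_ip=None):
    """Verify a Turnstile token server-side.

    'post' is an injected helper (url, form_data) -> parsed JSON dict.
    Returns (ok, user_message_or_None)."""
    data = {"secret": secret, "response": token}
    if remote_ip:
        data["remoteip"] = remote_ip  # forwarded when available
    result = post(SITEVERIFY_URL, data)
    if result.get("success"):
        return True, None
    codes = result.get("error-codes", [])
    # Translate provider codes into user-readable messages.
    if "timeout-or-duplicate" in codes:
        return False, "Verification expired, please try again."
    return False, "Verification failed, please retry the challenge."
```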

Mail, OTP, And Operational Messaging

The backend mail service handles three different classes of outbound email:

  • contact notifications
  • registration OTP emails
  • password reset OTP emails

The contact email has a more branded HTML layout, while registration and reset flows use simpler utility-first templates. That separation makes sense:

  • contact mail is for me as the receiver
  • OTP mail is for users who need fast clarity over design complexity

This is a subtle but correct prioritization of communication goals.

Backend Policies, Caching, And HTTP Behavior

The codebase also uses HTTP semantics quite intentionally.

  • public profile responses use controlled browser and edge caching
  • comments and saved-post status responses are no-store
  • weather uses timed revalidation
  • music and workspace data stay dynamic where needed
  • frontend content pages are statically generated when possible

This mix matters. Not all data should be treated the same:

  • authored content can be mostly static
  • live presence needs fresh reads
  • authenticated state should not be cached recklessly
  • external data can often tolerate controlled revalidation

This is exactly the kind of judgment that separates a clean architecture from a pile of working endpoints.
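That per-resource judgment can be captured in one small policy table. The max-age and revalidation numbers below are invented for illustration; only the shape of the policy (public caching for profiles, `no-store` for authenticated state, timed revalidation for weather) comes from the description above.

```python
def cache_headers(resource: str) -> dict:
    """Map a resource class to its caching policy. Values illustrative."""
    policies = {
        "public_profile": {"Cache-Control": "public, max-age=60, s-maxage=300"},
        "comments": {"Cache-Control": "no-store"},
        "saved_post_status": {"Cache-Control": "no-store"},
        "weather": {"Cache-Control": "s-maxage=600, stale-while-revalidate=60"},
    }
    # Default to the safest policy for anything unclassified.
    return policies.get(resource, {"Cache-Control": "no-store"})
```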

Strengths Of The Current Architecture

Several things are working especially well together here.

| Strength | Why it matters |
| --- | --- |
| static content + dynamic interaction split | keeps public pages fast without sacrificing product features |
| shared article shell for blog and work | reduces UI drift and duplication |
| typed frontend API layer | centralizes auth, refresh, timeout, and error behavior |
| small but useful caching strategy | improves responsiveness without a complex state stack |
| popup-based OAuth linking | avoids awkward full-page reloads in settings flows |
| presigned media uploads | keeps backend bandwidth and storage trust under control |
| graceful fallback for live data | keeps the site feeling stable even when external sources wobble |

Current Tradeoffs And Constraints

No architecture is perfect, and this one has conscious constraints.

  • The custom MDX renderer keeps content controlled, but it also limits richer embedded visualizations unless new components are added.
  • The lightweight cache is easy to reason about, but it does not replace full query invalidation tooling if the app grows much further.
  • Comments are intentionally scoped and simple; moderation and abuse tooling are still minimal.
  • The presentation adapters that normalize external data are helpful, but they also mean some presentation logic now lives server-side near the frontend.
  • The system is carefully split, but that also means frontend and backend changes need coordination when auth or profile flows evolve.

These are reasonable tradeoffs for the current product size. They are signs of deliberate scope, not negligence.

Final Perspective

abhinavflac works because it respects the difference between what should be static, what should be dynamic, and what should be protected.

The static side gives the site clarity:

  • MDX content
  • generated routes
  • shared article surfaces
  • predictable page composition

The dynamic side gives it life:

  • music
  • presence
  • workspace activity
  • weather
  • comments
  • saved posts
  • profile updates

The protected side gives it trust:

  • OTP sessions
  • token versioning
  • OAuth state signing
  • upload confirmation tokens
  • Turnstile verification
  • account lifecycle enforcement

That balance is the real architecture.

The visual layer is what people notice first. The system boundaries are what make the whole thing hold together after that first impression is over.
