
Building a Video Streaming Platform: Architecture & Code

A deep dive into architecting and building a production-ready video streaming platform using Node.js, AWS, and adaptive bitrate streaming.

Jay Salot


Sr. Full Stack Developer

May 9, 2026 · 7 min read


Last year, I built a streaming platform for an edtech client who wanted to deliver course videos to thousands of students simultaneously. It sounds straightforward until you realize that streaming video at scale involves far more than serving MP4 files from S3. Let me walk you through the real architecture, code, and gotchas I encountered.

Why You Can't Just Serve MP4 Files

My first instinct was simple: upload videos to S3, generate signed URLs, stick them in a <video> tag. Done, right? Wrong.

The problem is user bandwidth varies wildly. Someone on fiber gets a 4K stream while someone on mobile data can't even load 480p without buffering every 10 seconds. You need adaptive bitrate streaming (ABR) where the player automatically switches quality based on network conditions.

This means converting your source video into multiple resolutions and breaking them into small chunks. The industry standards are HLS (HTTP Live Streaming) and DASH. I went with HLS because it works everywhere—iOS, Android, modern browsers—without plugins.

Video Transcoding Pipeline

Here's where things get expensive if you're not careful. Every uploaded video needs transcoding into multiple bitrates. I tried two approaches:

Approach 1: FFmpeg on EC2

Initially, I spun up EC2 instances running FFmpeg. The gotcha here is transcoding is CPU-intensive and slow. A 1-hour 1080p video took about 20 minutes to transcode on a c5.2xlarge instance, which costs money while it churns away.

// Worker process for video transcoding
const fs = require('fs');
const path = require('path');
const { pipeline } = require('stream/promises');
const ffmpeg = require('fluent-ffmpeg');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function transcodeVideo(inputKey, outputPrefix) {
  const resolutions = [
    { width: 1920, height: 1080, bitrate: '5000k', name: '1080p' },
    { width: 1280, height: 720, bitrate: '2800k', name: '720p' },
    { width: 854, height: 480, bitrate: '1400k', name: '480p' },
    { width: 640, height: 360, bitrate: '800k', name: '360p' }
  ];

  // Download the source to local disk first: a single S3 read stream
  // can't be consumed by four ffmpeg processes in parallel.
  const inputPath = `/tmp/${path.basename(inputKey)}`;
  await pipeline(
    s3.getObject({ Bucket: process.env.SOURCE_BUCKET, Key: inputKey }).createReadStream(),
    fs.createWriteStream(inputPath)
  );

  const transcodePromises = resolutions.map(res => {
    return new Promise((resolve, reject) => {
      const outputPath = `/tmp/${res.name}.m3u8`;

      ffmpeg(inputPath)
        .outputOptions([
          `-vf scale=${res.width}:${res.height}`,
          `-c:v libx264`, // 'h264' is only a decoder name; libx264 is the encoder
          `-b:v ${res.bitrate}`,
          `-c:a aac`,
          `-hls_time 10`,
          `-hls_playlist_type vod`,
          `-hls_segment_filename /tmp/${res.name}_%03d.ts`
        ])
        .output(outputPath)
        .on('end', async () => {
          // Upload segments and playlist to S3
          await uploadHLSFiles(outputPath, `${outputPrefix}/${res.name}`);
          resolve();
        })
        .on('error', reject)
        .run();
    });
  });

  await Promise.all(transcodePromises);
  await generateMasterPlaylist(outputPrefix, resolutions);
}
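The uploadHLSFiles and generateMasterPlaylist helpers aren't shown above. The master playlist is the single .m3u8 the player loads first; it lists every rendition with its bandwidth and resolution so the player can switch between them. Here's a rough sketch of how that generation could look, assuming each rendition's playlist is uploaded under its own prefix (the key layout is my assumption, not a fixed convention):

// Sketch: build the HLS master playlist and upload it next to the renditions.
async function generateMasterPlaylist(outputPrefix, resolutions) {
  const lines = ['#EXTM3U', '#EXT-X-VERSION:3'];

  for (const res of resolutions) {
    const bandwidth = parseInt(res.bitrate, 10) * 1000; // '5000k' -> 5000000
    lines.push(
      `#EXT-X-STREAM-INF:BANDWIDTH=${bandwidth},RESOLUTION=${res.width}x${res.height}`,
      `${res.name}/${res.name}.m3u8` // assumes uploadHLSFiles keyed renditions this way
    );
  }

  await s3.putObject({
    Bucket: process.env.OUTPUT_BUCKET,
    Key: `${outputPrefix}/master.m3u8`,
    Body: lines.join('\n'),
    ContentType: 'application/vnd.apple.mpegurl'
  }).promise();
}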

This works but doesn't scale well. You need a queue (SQS), worker management, autoscaling—it's a whole infrastructure headache.

Approach 2: AWS MediaConvert

I switched to AWS MediaConvert and honestly, it's worth every penny. Serverless transcoding that scales automatically. You pay per minute of output video, which is way more predictable than managing EC2 capacity.

import { MediaConvertClient, CreateJobCommand } from '@aws-sdk/client-mediaconvert';

interface TranscodeJobParams {
  sourceKey: string;
  destinationPrefix: string;
}

async function createTranscodeJob({ sourceKey, destinationPrefix }: TranscodeJobParams) {
  const client = new MediaConvertClient({ region: 'us-east-1' });
  
  const jobSettings = {
    Role: process.env.MEDIACONVERT_ROLE_ARN!,
    Settings: {
      Inputs: [{
        FileInput: `s3://${process.env.SOURCE_BUCKET}/${sourceKey}`,
        AudioSelectors: { 'Audio Selector 1': { DefaultSelection: 'DEFAULT' } },
        VideoSelector: {}
      }],
      OutputGroups: [{
        Name: 'Apple HLS',
        OutputGroupSettings: {
          Type: 'HLS_GROUP_SETTINGS',
          HlsGroupSettings: {
            Destination: `s3://${process.env.OUTPUT_BUCKET}/${destinationPrefix}/`,
            SegmentLength: 10,
            MinSegmentLength: 0
          }
        },
        // Trimmed for brevity: a production job also needs ContainerSettings
        // (M3U8), AudioDescriptions, and an explicit RateControlMode per output.
        Outputs: [
          { NameModifier: '_1080p', VideoDescription: { Width: 1920, Height: 1080, CodecSettings: { Codec: 'H_264', H264Settings: { Bitrate: 5000000 } } } },
          { NameModifier: '_720p', VideoDescription: { Width: 1280, Height: 720, CodecSettings: { Codec: 'H_264', H264Settings: { Bitrate: 2800000 } } } },
          { NameModifier: '_480p', VideoDescription: { Width: 854, Height: 480, CodecSettings: { Codec: 'H_264', H264Settings: { Bitrate: 1400000 } } } }
        ]
      }]
    }
  };

  const command = new CreateJobCommand(jobSettings);
  const response = await client.send(command);
  
  return response.Job?.Id;
}

The workflow is: upload source video → trigger Lambda → create MediaConvert job → get notified via EventBridge when complete → update database with HLS manifest URL.
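The completion side is just another small Lambda subscribed to MediaConvert's "Job State Change" events on EventBridge. A minimal sketch, assuming you attach a videoId through the job's UserMetadata when creating it (not shown in the snippet above) and that the Video model import path is a placeholder:

// Lambda behind an EventBridge rule: source "aws.mediaconvert",
// detail-type "MediaConvert Job State Change".
import { Video } from './models/video'; // assumed path to the Video model

export const handler = async (event: any) => {
  const detail = event.detail;
  if (detail.status !== 'COMPLETE') return;

  // Assumes a videoId was attached via UserMetadata on CreateJobCommand.
  const videoId = detail.userMetadata?.videoId;

  // For HLS output groups, the COMPLETE event lists the generated playlists.
  const playlistPath: string = detail.outputGroupDetails?.[0]?.playlistFilePaths?.[0];
  const manifestKey = playlistPath.replace(`s3://${process.env.OUTPUT_BUCKET}/`, '');

  await Video.findByIdAndUpdate(videoId, { status: 'ready', manifestKey });
};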

Serving Video Content

Once you have HLS playlists and segments in S3, serving them is the easy part. But there are still considerations.

CloudFront Distribution

Never serve video directly from S3. You need a CDN. I set up CloudFront with these specific behaviors (there's a CDK sketch of this after the list):

  • Cache .m3u8 playlists for 60 seconds - They update infrequently but you want some flexibility
  • Cache .ts segments for 1 year - They never change once generated
  • Enable CORS - Browsers need this for video requests from different origins
  • Signed URLs for protected content - More on this below
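If you manage the distribution as code, the behavior split above looks roughly like this in CDK. It's a sketch under my own assumptions: the construct ids and bucket reference are placeholders, and the trusted key group wiring for signed URLs is omitted.

import * as cdk from 'aws-cdk-lib';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as s3 from 'aws-cdk-lib/aws-s3';

declare const stack: cdk.Stack;       // placeholder scope
declare const hlsBucket: s3.Bucket;   // bucket holding the HLS outputs

const origin = new origins.S3Origin(hlsBucket);

// Short TTL for playlists, long TTL for immutable segments.
const playlistCache = new cloudfront.CachePolicy(stack, 'PlaylistCache', {
  defaultTtl: cdk.Duration.seconds(60),
});
const segmentCache = new cloudfront.CachePolicy(stack, 'SegmentCache', {
  defaultTtl: cdk.Duration.days(365),
  maxTtl: cdk.Duration.days(365),
});

new cloudfront.Distribution(stack, 'VideoCdn', {
  // Default behavior catches the .m3u8 playlists.
  defaultBehavior: {
    origin,
    cachePolicy: playlistCache,
    responseHeadersPolicy: cloudfront.ResponseHeadersPolicy.CORS_ALLOW_ALL_ORIGINS,
  },
  additionalBehaviors: {
    '*.ts': {
      origin,
      cachePolicy: segmentCache,
      responseHeadersPolicy: cloudfront.ResponseHeadersPolicy.CORS_ALLOW_ALL_ORIGINS,
    },
  },
});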

Authentication and Authorization

For paid content or user-specific videos, you can't just make files public. I use CloudFront signed URLs generated by the backend:

const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront.Signer(
  process.env.CLOUDFRONT_KEY_PAIR_ID,
  process.env.CLOUDFRONT_PRIVATE_KEY
);

function generateSignedUrl(videoKey, userId) {
  // Verify user has access to this video
  // ... authorization logic ...

  const url = `https://${process.env.CLOUDFRONT_DOMAIN}/${videoKey}`;
  const expireTime = Math.floor(Date.now() / 1000) + 3600; // 1 hour

  return cloudfront.getSignedUrl({
    url,
    expires: expireTime
  });
}

app.get('/api/videos/:videoId/stream', authenticateUser, async (req, res) => {
  const { videoId } = req.params;
  const video = await Video.findById(videoId);
  
  if (!video) return res.status(404).json({ error: 'Not found' });
  if (!canUserAccessVideo(req.user.id, video)) {
    return res.status(403).json({ error: 'Forbidden' });
  }

  const signedUrl = generateSignedUrl(video.manifestKey, req.user.id);
  res.json({ streamUrl: signedUrl });
});

The frontend requests the stream URL from your API, which returns a time-limited signed URL. Video.js or HLS.js players handle the rest. One wrinkle specific to HLS: a signed URL only covers the manifest, not the segment URLs it references, so in practice you either sign with a custom policy that wildcards the video's path or use CloudFront signed cookies instead.

Frontend Player Implementation

For the React frontend, I use Video.js; since version 7 it ships with HLS support built in (the http-streaming/VHS engine), so no separate plugin is needed. It's battle-tested and handles ABR seamlessly.

import React, { useEffect, useRef } from 'react';
import videojs from 'video.js';
import 'video.js/dist/video-js.css';

interface VideoPlayerProps {
  videoId: string;
}

export const VideoPlayer: React.FC<VideoPlayerProps> = ({ videoId }) => {
  const videoRef = useRef<HTMLVideoElement>(null);
  const playerRef = useRef<any>(null);

  useEffect(() => {
    if (!videoRef.current) return;

    const fetchStreamUrl = async () => {
      const response = await fetch(`/api/videos/${videoId}/stream`, {
        headers: { 'Authorization': `Bearer ${localStorage.getItem('token')}` }
      });
      const { streamUrl } = await response.json();

      playerRef.current = videojs(videoRef.current, {
        controls: true,
        autoplay: false,
        preload: 'auto',
        fluid: true,
        sources: [{ src: streamUrl, type: 'application/x-mpegURL' }]
      });

      // Track watch progress. Note: 'timeupdate' fires several times per second,
      // so throttle saveWatchProgress (defined elsewhere) before shipping this.
      playerRef.current.on('timeupdate', () => {
        const currentTime = playerRef.current.currentTime();
        saveWatchProgress(videoId, currentTime);
      });
    };

    fetchStreamUrl();

    return () => {
      if (playerRef.current) {
        playerRef.current.dispose();
      }
    };
  }, [videoId]);

  return (
    <div data-vjs-player>
      <video ref={videoRef} className="video-js" />
    </div>
  );
};

Handling Live Streaming

Live streaming is a different beast. For a live event feature, I used AWS MediaLive + MediaPackage. The setup is complex but it works:

  • MediaLive ingests RTMP streams (from OBS, mobile apps, etc.)
  • MediaPackage creates the HLS/DASH manifests and handles DVR functionality
  • CloudFront distributes to viewers

The cost bit me here—MediaLive charges per input hour regardless of viewers, so a 24/7 channel gets expensive fast. For sporadic live events, starting/stopping channels programmatically via the SDK is crucial.
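Starting and stopping is two SDK calls; the scheduling around them (a cron job, an admin button, an EventBridge rule before the event) is up to you. A sketch, assuming the channel and its input already exist:

import {
  MediaLiveClient,
  StartChannelCommand,
  StopChannelCommand,
} from '@aws-sdk/client-medialive';

const mediaLive = new MediaLiveClient({ region: 'us-east-1' });

// Start shortly before the event and stop right after --
// MediaLive bills per input-hour whether or not anyone is watching.
export async function startLiveChannel(channelId: string) {
  await mediaLive.send(new StartChannelCommand({ ChannelId: channelId }));
}

export async function stopLiveChannel(channelId: string) {
  await mediaLive.send(new StopChannelCommand({ ChannelId: channelId }));
}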

Monitoring and Analytics

Users will complain about buffering before anything else. I track these metrics:

  • Buffering events - How often and how long
  • Bitrate distribution - Are most users getting 1080p or stuck at 360p?
  • Geographic performance - CloudFront edge location issues
  • Error rates - Failed segment loads, manifest errors

I send events from the video player to a backend endpoint that logs to BigQuery for analysis:

playerRef.current.on('waiting', () => {
  fetch('/api/analytics/video-event', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      eventType: 'buffering',
      videoId,
      timestamp: Date.now(),
      currentQuality: playerRef.current.qualityLevels().selectedIndex,
      userAgent: navigator.userAgent
    })
  });
});
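On the backend, that endpoint does little more than attach the user and stream-insert the event into BigQuery. A rough sketch; the dataset and table names are placeholders, not the actual schema I used:

const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

app.post('/api/analytics/video-event', authenticateUser, async (req, res) => {
  const { eventType, videoId, timestamp, currentQuality, userAgent } = req.body;

  // Streaming insert into a placeholder dataset/table.
  await bigquery
    .dataset('video_analytics')
    .table('player_events')
    .insert([{
      event_type: eventType,
      video_id: videoId,
      user_id: req.user.id,
      event_ts: new Date(timestamp).toISOString(),
      quality_index: currentQuality,
      user_agent: userAgent
    }]);

  res.status(204).end();
});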

Cost Optimization Lessons

Streaming gets expensive. Here's what actually moved the needle for me:

1. Aggressive caching: CloudFront cache hit rates above 90% are achievable. Monitor your cache stats.

2. Smart transcoding: Don't transcode every resolution for every video. For short clips under 2 minutes, just do 720p and 360p. Users won't notice on short content.

3. Lifecycle policies: Move infrequently accessed source videos to S3 Glacier after 90 days (see the sketch after this list). Keep the HLS outputs in Standard since they're smaller and accessed more.

4. Regional considerations: If 80% of your users are in one region, use a regional MediaConvert queue there instead of us-east-1. Data transfer costs matter.
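For the lifecycle policy in item 3, the rule is a few lines if the bucket is defined in CDK (the bucket reference here is a placeholder):

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

declare const sourceBucket: s3.Bucket; // bucket holding the uploaded source files

// Source files rarely need re-transcoding after launch; archive them after 90 days.
// Keep the HLS outputs in Standard -- they're what viewers actually hit.
sourceBucket.addLifecycleRule({
  transitions: [{
    storageClass: s3.StorageClass.GLACIER,
    transitionAfter: cdk.Duration.days(90),
  }],
});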

Key Takeaways

Building a production streaming platform taught me that the actual video player code is maybe 10% of the work. The other 90% is transcoding pipelines, CDN configuration, monitoring, and cost management.

If I were starting from scratch today, I'd use AWS MediaConvert for transcoding (don't roll your own), CloudFront for delivery (mandatory), and Video.js on the frontend (mature and reliable). Skip the complexity of managing your own FFmpeg workers unless you have very specific needs that justify the operational overhead.

The biggest gotcha? Underestimating bandwidth costs. At scale, delivering video is expensive. Plan your pricing model accordingly and optimize aggressively from day one.

Tags: Video Streaming, AWS, Node.js, React, Cloud Architecture