Guide to manually package a Node.js Express app, upload ZIP to S3, and update an existing AWS Lambda function, including handler setup and testing via Function URL.
In this guide you’ll manually deploy a Node.js Express application to AWS Lambda (Node.js 20.x). We will package the app, upload the ZIP to S3, and update an existing Lambda function. In a follow-up lesson we’ll automate this with Jenkins Pipelines.

Below are the AWS objects created for this deployment and their purposes.
| Resource Type | Name / Identifier | Purpose |
| --- | --- | --- |
| S3 Bucket | solar-system-lambda-bucket | Store the deployment ZIP for Lambda |
| Lambda Function | solar-system-function | The existing function we will update |
| Function URL | (Lambda Function URL) | Public endpoint to verify UI and behavior |
We will create a ZIP archive of the minimal application artifacts, upload it to the S3 bucket above, and use aws lambda update-function-code to point the Lambda function to that S3 object. The Lambda function already exists; we will update it instead of creating a new function.
Key Lambda details:
Runtime: Node.js 20.x
Handler: app.handler (project uses app.js)
Current package size: ~10 MB
The packaged application contains server-side JavaScript plus static assets (HTML, CSS, images).
Important: the deployed Lambda currently has MongoDB credentials stored as plaintext environment variables.
Do not store sensitive secrets in plain environment variables for production. Use a secrets manager and grant the Lambda execution role least privilege.
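One common pattern (a sketch, not the deployed code) is to keep the three Mongo values in a single JSON secret and parse it at cold start. Here the secret string is assumed to come from a secrets manager (e.g. AWS Secrets Manager's GetSecretValue, fetched by the Lambda execution role); only the parsing step is shown:

```javascript
// Sketch only: parse a JSON secret payload (as a secrets manager would
// return it) into the values consumed by mongoose.connect.
// In real code the string would be fetched at cold start via the
// Lambda execution role, not read from plaintext env vars.
function buildMongoConfig(secretString) {
  const secret = JSON.parse(secretString);
  return {
    uri: secret.MONGO_URI,
    options: { user: secret.MONGO_USERNAME, pass: secret.MONGO_PASSWORD },
  };
}
module.exports = { buildMongoConfig };
```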
Relevant excerpt from app.js showing serverless-http usage and MongoDB env var consumption:
```javascript
const path = require('path');
const fs = require('fs');
const express = require('express');
const os = require('os');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
const cors = require('cors');
const serverless = require('serverless-http');

const app = express();
app.use(bodyParser.json());
app.use(express.static(path.join(__dirname, '/')));
app.use(cors());

mongoose.connect(process.env.MONGO_URI, {
    user: process.env.MONGO_USERNAME,
    pass: process.env.MONGO_PASSWORD,
    useNewUrlParser: true,
    useUnifiedTopology: true
}, function(err) {
    if (err) {
        console.error('MongoDB connection error:', err);
    }
});

// If running locally, the app might have these lines:
// app.listen(3000, () => { console.log("Server successfully running on port - " + 3000); })
// module.exports = app;

// For Lambda we must export the serverless handler:
module.exports.handler = serverless(app);
```
Deployment checklist (high-level)
1. Clone the repository and install dependencies.
2. Apply any app changes to verify deployment (e.g., bump a visible version string).
3. Remove or comment local server start code (app.listen) and ensure the Lambda handler export is present.
4. Zip the minimal files required by Lambda (app files, package.json, index.html, node_modules).
5. Upload the ZIP to S3.
6. Update the Lambda function using the S3 object via AWS CLI.
7. Retrieve and test the Function URL.
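The app.listen/handler edits in the checklist are easy to get wrong, and a broken handler export only surfaces after deployment. A small pre-zip sanity check can catch this locally; the helper below is hypothetical (not part of the repo) and simply scans app.js source for an active handler export and no active app.listen call:

```javascript
// Hypothetical pre-zip check (not part of the repo): given the app.js
// source, confirm the serverless handler export is active and any
// local app.listen(...) call has been commented out.
function isLambdaReady(source) {
  const active = source
    .split('\n')
    .filter((line) => !line.trim().startsWith('//'));
  const hasHandler = active.some((l) => l.includes('module.exports.handler'));
  const stillListens = active.some((l) => l.includes('app.listen('));
  return hasHandler && !stillListens;
}
module.exports = { isLambdaReady };
```

Run it against fs.readFileSync('app.js', 'utf8') just before creating the ZIP and abort the packaging step if it returns false.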
Recommended shell commands (run from a workspace/sandbox):
```shell
# clone and enter repo (example)
git clone https://gitlab.com/dasher-org/solar-system.git
cd solar-system

# install dependencies (creates node_modules)
npm install

# update index.html version (example using sed)
# (This is just an example change to indicate a new version in the UI)
sed -i 's/SOLAR SYSTEM 3.0/SOLAR SYSTEM 4.0/' index.html

# Modify app.js for Lambda:
# Comment any local app.listen line and module.exports = app;
sed -i 's/^app\.listen(3000.*$/\/\/&/' app.js
sed -i 's/^module\.exports = app;/\/\/&/' app.js
# Ensure the serverless handler line is uncommented:
sed -i 's/^\/\/module\.exports\.handler/module.exports.handler/' app.js

# Verify the last 5 lines to confirm changes
tail -n 5 app.js

# Create the deployment ZIP with the needed files (app*, package*, index.html, node*)
zip -qr solar-system-lambda.zip app* package* index.html node*

# Upload the ZIP to S3
aws s3 cp solar-system-lambda.zip s3://solar-system-lambda-bucket/solar-system-lambda.zip

# Update the Lambda function from the S3 object
aws lambda update-function-code --function-name solar-system-function --s3-bucket solar-system-lambda-bucket --s3-key solar-system-lambda.zip

# Retrieve function URL config to get the FunctionUrl
aws lambda get-function-url-config --function-name solar-system-function
```
After uploading the ZIP, verify the S3 object exists in the console.
When the CLI update completes, it returns function metadata (JSON) including LastModified, CodeSize, and the environment variables currently configured. Example (truncated):