Join the EduLinkUp Team
Be part of a mission to transform education. Work with passionate people, gain real-world experience, and make an impact.
Front-End Web Development
Back-End Web Development
Full Stack Development
Data Science
AI / Machine Learning
Cyber Security
Python Development
Blockchain Technology
IoT (Internet of Things)
Cloud Computing
Graphic Design
Campus Ambassador
Cohort 2 Applications Are Now Open!
Note: Cohort 1 is still ongoing, but its applications are closed.
Internship Manual
Review intern responsibilities before applying
Built by Our Interns
Real projects shipped by EduLinkUp interns: live, open-source, and production-ready.
House Price Predictor
by Anisha Shaw
This project predicts house prices using basic machine learning techniques in Python. It is designed to be easy to understand, especially for beginners learning data science and ML. Highlights:
- Performed Exploratory Data Analysis (EDA) to understand feature relationships.
- Handled missing values and applied feature scaling and encoding.
- Trained a Linear Regression model using scikit-learn.
- Evaluated the model using RMSE, MAE, and R² score.
- Visualized results with actual vs. predicted price plots.
- Saved the trained model for future use or deployment.
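The evaluation step above (RMSE, MAE, and R²) can be sketched in plain Python. The prices below are made-up example values for illustration, not the project's data:

```python
import math

def regression_metrics(y_true, y_pred):
    """Return RMSE, MAE and R^2 for actual vs. predicted prices."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mean_true = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return rmse, mae, r2

# Made-up actual vs. predicted prices, for illustration only
actual = [50.0, 60.0, 80.0, 100.0]
predicted = [52.0, 58.0, 85.0, 95.0]
rmse, mae, r2 = regression_metrics(actual, predicted)
```

In practice the project uses scikit-learn's built-in metrics, which compute the same quantities.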
ml_foundations
Security Report Generator
by Eishit Balyan
I built a security vulnerability dashboard that takes Nessus scanner CSV files and actually makes them useful. When you run a vulnerability scan you get a massive CSV dumped on you, and figuring out what to fix first is genuinely painful. So the app ingests that file, parses out all the assets and vulnerabilities, stores everything in a proper database, and automatically calculates a risk score for every asset. The formula multiplies each vulnerability's CVSS score by a severity weight and rolls it up by asset criticality, so you immediately know which machines are most exposed and why. On top of that, it queries the National Vulnerability Database API in the background for every CVE it finds: within about thirty seconds of uploading a scan you've got CWE classifications, official CVSS scores, patch availability, and full descriptions pulled from NVD automatically, with no manual lookups. The frontend is a React dashboard with a risk ranking table where you can click any asset, see every CVE on it with direct NVD links, and adjust its criticality score in real time. Charts, severity breakdowns, and scan history all update without page reloads. The feature I'm most proud of is the one-click PDF report generator. Click export and you get a proper multi-page PDF with a branded cover page, an executive summary written from actual data, severity charts, a full findings breakdown, and auto-generated recommendations based on what was found: something you could actually hand to a manager or auditor. The stack is FastAPI, SQLAlchemy, and SQLite on the backend, and React with plain CSS on the frontend.
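The scoring rule described above can be sketched roughly like this. The severity weights and the criticality multiplier here are hypothetical stand-ins, not the app's actual constants:

```python
# Hypothetical severity weights -- the app's real constants may differ.
SEVERITY_WEIGHTS = {"Critical": 4.0, "High": 3.0, "Medium": 2.0, "Low": 1.0}

def asset_risk_score(vulns, criticality=1.0):
    """vulns: list of (cvss, severity) pairs; criticality scales the total."""
    base = sum(cvss * SEVERITY_WEIGHTS.get(sev, 1.0) for cvss, sev in vulns)
    return round(base * criticality, 2)

# One asset with three findings, rated more critical than average
score = asset_risk_score([(9.8, "Critical"), (7.5, "High"), (4.3, "Medium")],
                         criticality=1.5)
```

Weighting CVSS by severity band before applying criticality means one critical finding on an important server outranks many low-severity findings on a test box, which is the prioritization the dashboard is after.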
linux_tools
Titanic Survival Predictor
by Malleswarapu Sriya
I built a machine learning model that predicts whether a Titanic passenger would survive based on features like age, gender, passenger class, fare, and family information. The project involved data preprocessing, handling missing values, encoding categorical variables, and creating new features such as family size and passenger titles. I trained and compared multiple models including Logistic Regression, Decision Tree, Random Forest, and SVM, and selected Random Forest as the best-performing model after hyperparameter tuning. Finally, I deployed the model using a Streamlit web application where users can enter passenger details and get real-time survival predictions.
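The feature-engineering step mentioned above (family size and passenger titles) might look roughly like this; the helper names are illustrative, not taken from the project:

```python
import re

def family_size(sibsp, parch):
    """Siblings/spouses + parents/children + the passenger themselves."""
    return sibsp + parch + 1

def extract_title(name):
    """Pull the honorific out of a 'Surname, Title. Given names' string."""
    match = re.search(r",\s*([A-Za-z]+)\.", name)
    return match.group(1) if match else "Unknown"

size = family_size(1, 2)
title = extract_title("Braund, Mr. Owen Harris")
```

Derived features like these often matter more to the final model than the raw columns, which is why they are worth the preprocessing effort.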
ml_sklearn
Auto-Scaling Web Service
by Kavinaya B S
Designed and deployed a highly available and scalable web application using Amazon EC2, Application Load Balancer, and Auto Scaling Groups with CPU-based target tracking policies. Implemented dynamic scaling to automatically increase instances during traffic spikes and reduce capacity during low demand, ensuring performance and cost optimization. Integrated CloudWatch monitoring and SNS notifications to track scaling events and system health across multiple Availability Zones.
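CPU-based target tracking works by scaling the group so average utilization moves toward the target. A simplified sketch of that arithmetic follows; AWS's real algorithm also accounts for cooldowns and instance warm-up, so this is only an approximation:

```python
import math

def desired_capacity(instances, avg_cpu, target_cpu, min_size=1, max_size=10):
    """Scale the group so average CPU heads toward the target utilization."""
    desired = math.ceil(instances * avg_cpu / target_cpu)
    return max(min_size, min(max_size, desired))

spike = desired_capacity(4, 90.0, 50.0)  # traffic spike: 4 * 90 / 50 -> scale out to 8
quiet = desired_capacity(4, 20.0, 50.0)  # low demand: 4 * 20 / 50 -> scale in to 2
```

The min/max clamp mirrors the Auto Scaling Group's configured size bounds, which is what keeps a scale-in event from terminating every instance.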
aws_fundamentals
Diabetes Risk Assessment
by Tanuja Sandip Nalage
This project builds a Diabetes Risk Assessment system using machine learning on the Pima Indians Diabetes dataset. The workflow includes data cleaning of medically invalid zero values, median imputation, feature engineering (BMI categories, age groups, glucose levels), and standardization. Multiple classification models (Logistic Regression and Naive Bayes) are trained and evaluated using accuracy, precision, recall, F1-score, and confusion matrices. The project emphasizes recall as a critical metric for medical screening to minimize false negatives (missing diabetic patients). Health insights are derived from key risk factors such as glucose level, BMI, age, and insulin patterns. A trained model is saved and the complete implementation is documented with medical disclaimers and ethical considerations.
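The recall emphasis described above can be illustrated with a minimal sketch; the labels are made up for the example:

```python
def recall(y_true, y_pred):
    """True-positive rate: of all actual diabetics, how many were flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Made-up labels: 1 = diabetic, 0 = non-diabetic
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]
r = recall(y_true, y_pred)  # one diabetic missed out of four
```

A false positive here costs a follow-up test, while a false negative means a missed diagnosis, which is why the project tunes for recall rather than raw accuracy.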
ml_sklearn