
International Quant Championship 2026 🖤💰 Win $100,000 In Cash Prizes
A global competition where you don’t just participate: you build, compete, and potentially earn while entering one of the highest-paying domains in tech.
From free courses and internships to DSA practice and events - everything you need to learn, grow, and succeed.
Access free Engineering, Medical, Board & Govt exam courses. Learn at your own pace with structured content and expert guidance.
Discover curated internship opportunities across tech, design, marketing and more. Grow your career with real-world experience.
Sharpen your problem-solving skills with curated Data Structures & Algorithms problems. Ace coding interviews at top companies.
Join coding contests, open-source drives, Google Arcade/Skills campaigns and community events. Build, compete & collaborate.
Contribute to real open-source projects, earn XP (points) through merged PRs, and climb the Leaderboard.
Docs, typos, minor fixes
Bug fixes, new components
Architecture, core refactors
// CONTRIBUTION PIPELINE
Don't just learn. Build. Our remote internship program bridges the gap between classroom theory and industry reality.
Real projects shipped by EduLinkUp interns - live, open-source, and production-ready.
by Anisha Shaw
This project focuses on predicting house prices using basic machine learning techniques in Python. It is designed to be easy to understand, especially for beginners learning data science and ML. Built a house price prediction system using Python and machine learning:
- Performed Exploratory Data Analysis (EDA) to understand feature relationships.
- Handled missing values and applied feature scaling & encoding.
- Trained a Linear Regression model using scikit-learn.
- Evaluated the model using RMSE, MAE, and R² score.
- Visualized results with actual vs. predicted price plots.
- Saved the trained model for future use or deployment.
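As a rough sketch of that workflow, the steps map onto scikit-learn like this (synthetic data and made-up features, not the project's actual dataset):

```python
# Rough sketch of the workflow above (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
area = rng.uniform(500, 3500, 200)                  # square feet
bedrooms = rng.integers(1, 6, 200).astype(float)
price = 50_000 + 120 * area + 8_000 * bedrooms + rng.normal(0, 10_000, 200)

X = np.column_stack([area, bedrooms])
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

scaler = StandardScaler().fit(X_train)              # scale using train stats only
model = LinearRegression().fit(scaler.transform(X_train), y_train)
pred = model.predict(scaler.transform(X_test))

rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"RMSE={rmse:.0f}  MAE={mean_absolute_error(y_test, pred):.0f}  "
      f"R2={r2_score(y_test, pred):.3f}")
```

Fitting the scaler on the training split only, then reusing it on the test split, is the detail beginners most often miss.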
ml_foundations
by Eishit Balyan
I built a security vulnerability dashboard that takes Nessus scanner CSV files and actually makes them useful. When you run a vulnerability scan, you get a massive CSV dumped on you, and figuring out what to fix first is genuinely painful. So the app ingests that file, parses out all the assets and vulnerabilities, stores everything in a proper database, and then automatically calculates a risk score for every asset. The formula multiplies each vulnerability's CVSS score by a severity weight and rolls it up by asset criticality — so you immediately know which machines are most exposed and why. On top of that it hits the National Vulnerability Database API in the background for every CVE it finds. So within about thirty seconds of uploading a scan you've got CWE classifications, official CVSS scores, patch availability, and full descriptions pulled from NVD automatically — no manual lookups. The frontend is a React dashboard. There's a risk ranking table where you can click any asset, see every CVE on it with direct NVD links, and adjust its criticality score in real time. Charts, severity breakdowns, scan history — all of it updates without page reloads. The feature I'm most proud of is the one-click PDF report generator. Click export and you get a proper multi-page PDF — branded cover page, executive summary written from actual data, severity charts, full findings breakdown, and auto-generated recommendations based on what was found. Something you could actually hand to a manager or auditor. Stack is FastAPI, SQLAlchemy, SQLite on the backend and React with plain CSS on the frontend.
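The scoring idea can be illustrated with a toy version (the dashboard's actual severity weights aren't given in the description, so the numbers below are placeholders):

```python
# Toy version of the asset risk formula; weights are placeholders.
SEVERITY_WEIGHT = {"Critical": 4.0, "High": 3.0, "Medium": 2.0, "Low": 1.0}

def asset_risk_score(vulns, asset_criticality):
    """Sum CVSS * severity weight over all findings, scaled by asset criticality."""
    base = sum(v["cvss"] * SEVERITY_WEIGHT[v["severity"]] for v in vulns)
    return base * asset_criticality

findings = [
    {"cvss": 9.8, "severity": "Critical"},   # e.g. a remote code execution
    {"cvss": 5.3, "severity": "Medium"},
]
score = asset_risk_score(findings, asset_criticality=1.5)
print(round(score, 1))   # (9.8*4 + 5.3*2) * 1.5 = 74.7
```

Weighting by both severity and asset criticality is what lets two machines with identical findings rank differently: the domain controller outranks the test VM.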
linux_tools
by MALLESWARAPU SRIYA
I built a machine learning model that predicts whether a Titanic passenger would survive based on features like age, gender, passenger class, fare, and family information. The project involved data preprocessing, handling missing values, encoding categorical variables, and creating new features such as family size and passenger titles. I trained and compared multiple models including Logistic Regression, Decision Tree, Random Forest, and SVM, and selected Random Forest as the best-performing model after hyperparameter tuning. Finally, I deployed the model using a Streamlit web application where users can enter passenger details and get real-time survival predictions.
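A condensed sketch of the preprocessing steps described above (a tiny made-up sample and an untuned model, not the real Titanic data or final hyperparameters):

```python
# Condensed sketch of the steps described: impute, engineer features, encode, fit.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "Name": ["Braund, Mr. Owen", "Cumings, Mrs. John", "Heikkinen, Miss. Laina"],
    "Sex": ["male", "female", "female"],
    "Age": [22.0, 38.0, None],
    "SibSp": [1, 1, 0], "Parch": [0, 0, 0], "Pclass": [3, 1, 3],
    "Survived": [0, 1, 1],
})

df["Age"] = df["Age"].fillna(df["Age"].median())                       # missing values
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1                       # new feature
df["Title"] = df["Name"].str.extract(r",\s*([^.]+)\.", expand=False)  # "Mr", "Mrs", ...
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})                    # encode categorical

X = df[["Sex", "Age", "FamilySize", "Pclass"]]
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, df["Survived"])
print(df[["Title", "FamilySize"]])
```

The title regex pulls the honorific between the comma and the first period of each name, which is where the classic Titanic feature comes from.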
ml_sklearn
by Kavinaya B S
Designed and deployed a highly available and scalable web application using Amazon EC2, Application Load Balancer, and Auto Scaling Groups with CPU-based target tracking policies. Implemented dynamic scaling to automatically increase instances during traffic spikes and reduce capacity during low demand, ensuring performance and cost optimization. Integrated CloudWatch monitoring and SNS notifications to track scaling events and system health across multiple Availability Zones.
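For readers curious what a CPU-based target-tracking policy looks like, here is an illustrative payload; the group name and the 50% target are placeholders, not the author's actual configuration. In practice a dict like this would be passed to boto3's Auto Scaling `put_scaling_policy` call:

```python
# Illustrative target-tracking scaling policy (placeholder names and target value).
policy = {
    "AutoScalingGroupName": "web-app-asg",            # hypothetical ASG name
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,   # scale out above ~50% average CPU, scale in below it
    },
}
# In a real deployment: boto3.client("autoscaling").put_scaling_policy(**policy)
print(policy["PolicyType"])
```

With target tracking, Auto Scaling creates and manages the CloudWatch alarms itself; you only state the metric and the value to hold it at.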
aws_fundamentals
by Tanuja Sandip Nalage
This project builds a Diabetes Risk Assessment system using machine learning on the Pima Indians Diabetes dataset. The workflow includes data cleaning of medically invalid zero values, median imputation, feature engineering (BMI categories, age groups, glucose levels), and standardization. Multiple classification models (Logistic Regression and Naive Bayes) are trained and evaluated using accuracy, precision, recall, F1-score, and confusion matrices. The project emphasizes recall as a critical metric for medical screening to minimize false negatives (missing diabetic patients). Health insights are derived from key risk factors such as glucose level, BMI, age, and insulin patterns. A trained model is saved and the complete implementation is documented with medical disclaimers and ethical considerations.
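The zero-value cleaning step can be sketched as follows (a four-row toy sample; the real Pima dataset has 768 rows and more columns):

```python
# Sketch of the cleaning step described above: medically impossible zeros
# become NaN, then each column's median fills the gaps.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Glucose": [148, 0, 183, 89],        # 0 is not a valid glucose reading
    "BMI": [33.6, 26.6, 0.0, 28.1],      # nor a valid BMI
    "Outcome": [1, 0, 1, 0],
})

for col in ["Glucose", "BMI"]:
    df[col] = df[col].replace(0, np.nan)       # mark invalid readings as missing
    df[col] = df[col].fillna(df[col].median()) # median imputation

print(df)
```

Replacing zeros with NaN first matters: otherwise the invalid zeros would drag the median itself down before imputation.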
ml_sklearn
Level up your cloud skills with the Google Cloud Arcade. Sync your profile, earn digital badges, and redeem exclusive points for premium swag.
Connect your Google Cloud Skills Boost profile to track your arcade points in real-time.
Explore the different prize tiers and plan your way to the legendary Arcade rewards.
Learn how to earn bonus points through our facilitator programs and milestones.

Master Python from basics to advanced concepts. This comprehensive course covers everything from variables and data types to advanced topics like decorators, generators, and web scraping. Build real-world projects and prepare for technical interviews.

Comprehensive NEET biology preparation with detailed explanations, animations, and 1000+ practice questions covering all topics from NCERT for Class 11 and 12.

Start your UPSC journey with our comprehensive foundation course covering all subjects including History, Geography, Polity, Economy, Environment, and Current Affairs with expert guidance.

Crack JEE Mains and Advanced with our expert-led mathematics course covering all topics including calculus, algebra, coordinate geometry, vectors, and 3D geometry with shortcuts and problem-solving techniques.
Discover the latest insights, tutorials, and stories from the EduLinkUp team.

A global competition where you don’t just participate: you build, compete, and potentially earn while entering one of the highest-paying domains in tech.

EduLinkUp Summer of Code 2026 ~ lovingly known as ELUSOC ~ is a 2-month open-source initiative crafted to unite developers across the globe. Organised by EduLinkUp, this program isn't a hackathon, and it's not a competition in the traditional sense ~ it's a transformative journey into the heart of collaborative software development. Whether you're a first-time contributor nervously writing your debut pull request or a seasoned engineer looking to give back ~ ELUSOC is your arena.

Claude Code is powerful — but the API bills add up fast. Here's how to run it completely free using Ollama, with local open-source models or free cloud tiers. No hacks, no API key, just 5 minutes of setup.
Connect with fellow learners, share resources, and grow together.
Collaboration is at the heart of EduLinkUp.
Ask questions, share knowledge, and learn from peers in our vibrant community.
Access curated video lessons, technical documentation, and comprehensive roadmap guides.
Get your doubts cleared by experienced mentors and industry experts.