DeepSeek: Real Breakthrough, DeepFake or National Security Threat?

“In the midst of chaos, there is also opportunity” – Sun Tzu, The Art of War, 5th Century BCE. Today marks a significant milestone in AI as DeepSeek, a Chinese AI startup, announced the release of its revolutionary R1 open-source large language model (LLM) rivaling OpenAI’s ChatGPT. The DeepSeek R1 model is designed to excel at complex reasoning tasks, matching the performance of OpenAI’s latest models while reportedly being developed at a fraction of the training and implementation cost. It is being widely reported that this R1 LLM was trained with Reinforcement Learning (RL) for a…

Beware of Human-Injected Left-Leaning Bias in AI Large Language Model (LLM) Outputs – the RLHF Technique Could Be Misused

In the realm of Machine Learning, Reinforcement Learning from Human Feedback (RLHF) stands out as an innovative technique in which human trainers play a crucial role in guiding a model’s learning process. Unlike traditional reinforcement learning, which relies solely on pre-defined rewards, RLHF incorporates human judgment to shape the training signal. This method can have significant implications, especially because it can be used to make models consistently favor certain outcomes over others. In this blog, we’ll delve into how trainers can influence models using RLHF, highlighting both the potential benefits and the pitfalls. Human trainers can introduce biases, whether consciously or…
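To make the mechanism concrete, here is a minimal, purely illustrative sketch (not DeepSeek’s or OpenAI’s actual pipeline) of the reward-modeling step at the heart of RLHF: a hypothetical trainer repeatedly prefers one vocabulary over another, and a tiny Bradley-Terry-style reward model learns to score that framing higher. The `feature` function, the “favored” word list, and the example pairs are all invented for this sketch.

```python
# Toy sketch of preference-based reward modeling (illustrative only).
# A human "trainer" repeatedly prefers one style of answer, and a simple
# Bradley-Terry-style reward model learns to score that style higher.
import math

def feature(text):
    # Hypothetical 1-D feature: fraction of words drawn from a "favored" vocabulary.
    favored = {"progressive", "equity", "collective"}
    words = text.lower().split()
    return sum(w in favored for w in words) / max(len(words), 1)

def train_reward(pairs, lr=1.0, epochs=200):
    """Learn a scalar weight w so that reward(x) = w * feature(x)
    ranks the trainer-preferred answer above the rejected one."""
    w = 0.0
    for _ in range(epochs):
        for preferred, rejected in pairs:
            # Bradley-Terry probability that 'preferred' beats 'rejected'
            diff = w * (feature(preferred) - feature(rejected))
            p = 1.0 / (1.0 + math.exp(-diff))
            # Gradient ascent on the log-likelihood of the human's choice
            w += lr * (1.0 - p) * (feature(preferred) - feature(rejected))
    return w

# The trainer always prefers answers using the "favored" vocabulary:
pairs = [
    ("equity and collective action", "markets and individual choice"),
    ("progressive policies help", "tax cuts help"),
]
w = train_reward(pairs)
reward = lambda text: w * feature(text)
```

After training, `reward` systematically scores the trainer’s preferred framing higher, even on new text: this is exactly how consistent human preferences, benign or not, get baked into the downstream model that is optimized against this reward.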

Personal Privacy – A Mirage in today’s Tech World? Renewed call to the incoming administration to protect us!

Are we giving up our privacy for convenience without thinking about the consequences? It seems that we all desire data privacy, but our actions often indicate otherwise! We have always called on users to be careful about what they post online. My kids are familiar with me telling them all the time, “think before posting / texting / tweeting anything online… because once you do, it will stay there forever!”. It is our view that people do not fully appreciate how pervasive our online digital footprint is and the amount of information (related to every aspect…

Some quick steps to overcome Bias and institute Fairness in Machine Learning Models

We are seeing that bias in Machine Learning models can be a big issue, since the data available to train these models can itself be biased. Consequently, biased Machine Learning systems can be dangerous when they become the basis for making decisions about humans automatically, with no human oversight, producing biased outcomes in fields such as employment and lending. Similarly, another area of concern is ML models used for political reporting that exhibit a significant “left wing” bias, publishing reports and stories with a left-leaning slant, which makes the current political divide more pronounced. Putting this…
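One quick, concrete first step toward the fairness checks described above is to measure a model’s selection rates across groups. The sketch below (hypothetical data and function names, shown only as an example of the idea) computes the disparate-impact ratio, the common “four-fifths rule” screen for demographic parity in automated employment or loan decisions.

```python
# Hypothetical quick check for one common fairness criterion, demographic
# parity: compare a model's positive-outcome rates across groups using the
# "four-fifths" (disparate impact) ratio. The data below is illustrative.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'approve loan') decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

# Toy loan decisions (1 = approved) for two demographic groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 75% approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 = 37.5% approved
ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50 -> flag for review
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is a cheap, automatable signal that the model’s outcomes deserve human review before deployment.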