Bootcamp to Business Dream: SCET Alum returns to Berkeley as CPG Founder/CEO

Just three blocks away from Embarcadero BART Station at 101 Spear, an SCET Bootcamp alum is building her dream business. Meet Kashish Juneja, the visionary entrepreneur and tenacious Cal grad whose dorm room drink experiments flourished into AURA, a high-quality, health-conscious boba brand.

Her journey began when she enrolled in Berkeley Bootcamp with an idea forming in her head. 

Reflecting on her time in Bootcamp, Kashish credits it with the realization that she wanted to pursue entrepreneurship. It provided the framework to structure her vision and transform it into a compelling pitch. While developing her product at the time, Clutch, a B2B delivery app, she experienced the roller coaster of highs and lows inherent in startup life – from identifying a problem to finding a team to prototyping. She loved and embraced both the triumphs and the setbacks.

AURA was created during the era of Zoom university, during the final two years of her degree. Empowered by the flexibility of remote learning, Kashish would turn off her camera and experiment with concocting new beverages in her dorm room. 

Kashish then took AURA to the next level by introducing it to the campus community: she set up shop in her dorm room, selling drinks and hosting blind taste tests for students.

Today, Kashish is excited to share the mission behind AURA. “You are what you eat,” she says, and “food impacts your mentality.” Growing up, Kashish had a complex relationship with food. She wanted to create something that was health-conscious but didn’t compromise on taste.

Kashish Juneja pours a small bottle of orange AURA drink into one of many empty clear glasses set up on a table. Four students watch while pouring their own AURA drinks into glasses.
Kashish Juneja sharing her AURA products with students

Beyond the beverage itself is a deeper commitment to the environment of joy and family values that she is creating around the brand. At her San Francisco location, she offers latte art classes and mentorship workshops, cultivating an inclusive space for all.

Kashish recently returned to SCET as a mentor, serving on a panel for ENGIN 183D Product Management, where she and other panelists shared insights on leading through influence and collaborating cross-functionally. 

Kashish wants aspiring entrepreneurs to know that she is a resource to them. Her advice is to take a leap of faith as she did. “Go for it. Obsess over solving the problem. Often school can tell us to focus on intelligence, but being in tune with our emotions brings us happiness,” she advises, “and whoever the customer is will see that.”

Kashish Juneja sits at a table, speaking to a group of six students who are taking notes on laptops and paper.
Kashish Juneja mentoring a Product Management student team

While AURA’s flagship store in San Francisco serves as her home base, Berkeley still holds a special place in Kashish’s heart. She envisions a stronger future presence in Berkeley, both by nurturing student entrepreneurs and by exploring the idea of a Berkeley store.

For now, AURA is open daily just five stops away from Berkeley via BART. Kashish welcomes young entrepreneurs seeking guidance, alongside anyone with a craving for guilt-free sweet treats.

Nobel Peace Prize Recipient Discusses War Crimes in the AI Era

The Sutardja Center for Entrepreneurship and Technology welcomed Oleksandra Matviichuk, recipient of the 2022 Nobel Peace Prize, to discuss the role of artificial intelligence in war crimes in Ukraine, the global implications of disinformation, and human rights.


On April 16, 2024, The Sutardja Center for Entrepreneurship and Technology (SCET) hosted an event titled “Tracking War Crimes in the AI Era. The Race to Record History and Keep it Intact.” Gigi Wang, Industry Fellow and Faculty at SCET, moderated an insightful panel discussion with esteemed panelists Oleksandra Matviichuk, leader of the Centre for Civil Liberties and 2022 Nobel Peace Prize Recipient, Alexa Koenig, adjunct professor at UC Berkeley School of Law and co-faculty director of the Human Rights Center Investigations Lab, and Gauthier Vasseur, executive director of the Fisher Center for Business Analytics at the Haas School of Business. 

Tracking War Crimes in the Age of AI

“We find ourselves in a digital world polluted with lies.”

Oleksandra Matviichuk 

Artificial Intelligence has profoundly transformed the legal landscape of justice and accountability in the context of war crimes. Until now, victims of war crimes have fought in vain for justice as perpetrators evade prosecution. Now, what previously required expensive tools and extensive coding has been made accessible through a simple natural language query – this revolutionary advancement will allow for the collection of war crime data at a much larger scale, enhancing its accessibility and reliability. With these powerful digital technologies at our fingertips, leaders are better equipped to fight for justice on behalf of victims at the individual level. However, such an operation will require the implementation of global infrastructure and careful regulation, as well as measures to address the psychosocial toll of graphic content. 

The four panelists face an audience of a few dozen people listening attentively to the discussion on the impact of AI on war crimes in Ukraine.
Gigi Wang, Alexa Koenig, Oleksandra Matviichuk and Gauthier Vasseur discuss the impact of AI on war crimes and human rights.

AI is a double-edged sword – the power of these technologies, while offering revolutionary solutions to age-old problems, has also opened doors to uncharted, treacherous digital territory. In particular, deepfakes – computer-generated images generally created with malevolent intentions – have undermined the integrity of information during war. Audio deepfakes pose an especially hazardous threat, as there are fewer points of verification than in images. Moreover, the speed at which social media algorithms disseminate disinformation has debilitating geopolitical consequences. By the time disinformation is debunked, it is already too late. The prevalence of “weapons of mass disinformation” threatens the trustworthiness of the facts from two sides: not only do deepfakes perpetuate violence and distrust, but they undermine legitimate content when people dismiss authentic content as fake.

Building Trust in Our Leaders and the Facts

“It’s time to take responsibility.” 

Oleksandra Matviichuk 

Alexa Koenig, who has researched how digital technologies affect human rights, describes a three-step verification process: examination of the technical data, content and contextual analysis, and source analysis. However, building trust is critical to dispelling disinformation – proving factual legitimacy with advanced verification methods does not mean that people will abandon a false narrative. Once beliefs are cemented, it can be difficult to convince people to change their minds. Evocative content can often bypass logical reasoning, leading to confirmation bias, amplified by social media algorithms.

Koenig noted, “Trust is a relationship.” Especially in times of crisis, our institutions and politicians must be trustworthy; a lack of trust will undermine leaders’ abilities to have authority and resolve division. 

Furthermore, media literacy is paramount in a digital world plagued by distrust. Social media sites in particular are hotbeds of disinformation and hate. Individuals, especially members of younger generations, must recognize the ramifications of engaging with deceptive content. Empowering individuals to conduct investigations into the media they consume is vital to halting disinformation in its tracks – if we are not cognizant of the consequences of our actions, we too are complicit in the propagation of these attacks on truth. As Gauthier Vasseur, executive director of the Fisher Center for Business Analytics, put it, “Let’s stop feeding the beast.” 

However, not all the responsibility can be placed on the individual, as Alexa Koenig points out. She notes that these changes necessitate broader cultural shifts, reinforced by structural interventions at the legal level. At the institutional level, policymakers must advocate for legislation promoting transparency and accountability, and institutions must increase support for research initiatives exploring the ethical implications of AI. Corporations also have a responsibility to establish social norms that curb the spread of disinformation. Short-term profits are never worth unleashing long-term catastrophes.

The four panelists pose for a photo together after the discussion on war crimes in the AI Era
From left to right: Alexa Koenig, Oleksandra Matviichuk, Gauthier Vasseur (photo by Vicky Liu/Berkeley SCET)

Where Innovation and Collaboration Come Together

“We have a historical responsibility for each person affected by this war.”

Oleksandra Matviichuk 

The implications of cybersecurity threats and widespread disinformation reach far beyond Ukraine’s borders–the war in Ukraine is not simply “Ukraine’s problem” but an international issue that represents a broader fight for justice. In the words of Oleksandra Matviichuk, Ukraine is engaged in a “fight for freedom in all senses” – the freedom to preserve the Ukrainian identity, the freedom to uphold democratic choice, and the freedom to live in a society in which rights are protected. 

Announcing the winners for Collider Cup XIV!

The Sutardja Center for Entrepreneurship & Technology (SCET) at UC Berkeley hosted Collider Cup XIV on May 6, 2024. The biannual competition showcased innovative student projects, from AI neurotech to cybersecurity, developed through SCET courses.

Team Playvision posing with Collider Cup trophy
Team Playvision (Photo by Adam Lau/Berkeley Engineering)

1st place and At-Large Bid winner – Playvision

Using technology to expand the effectiveness of sports training, Playvision’s Cup-winning innovation uses computer vision to analyze and tag football plays, making game analysis more efficient and freeing coaching staff to focus on other aspects of game preparation. With their technology, teams receive instant, detailed insights on plays, giving them a competitive edge. Playvision’s victory is all the more exciting because they were the At-Large Bid winner, securing the wildcard seat that gave them the opportunity to present at Collider Cup.

Sehej Bindra presenting SimpleCell (Photo by Adam Lau/Berkeley Engineering)

2nd place – SimpleCell

A major obstacle in the bioinformatics field is the time it takes to analyze an experiment. Not only are existing platforms like ChatGPT unintuitive due to their black-box format, but they also fail 60% of the time. SimpleCell eliminates this burden with a conversational, natural-language system that enables LLMs to work with other platforms and fit into bioinformatics workflows.

Aqua AI presenting on stage (Photo by Adam Lau/Berkeley Engineering)

3rd place – Aqua AI

Aqua AI is a team of competitive Cal swimmers who understand that the sport requires finely tuned coaching for success. Their product uses AI to analyze swim videos, from stroke count to tempo, and deliver targeted, insightful feedback. This specific technique feedback allows swimmers to improve their form rapidly, something coaches cannot provide given the number of athletes they oversee. The team is currently training its model on strokes from Cal’s top swimmers, giving it a competitive advantage built on data from some of the world’s best.

Natalia Shamoon presenting Cal Milk (Photo by Adam Lau/Berkeley Engineering)

Most Innovative and People’s Choice – Cal Milk

Aiming to design an effective plant-based food solution, Most Innovative and People’s Choice winner Cal Milk produces a vital milk protein called lactoferrin through precision fermentation with microalgae – no cows required. This safe and efficient production method has gained traction at local cafes due to its cost-effectiveness.

Alumni Expo Winner – Optigenix

This year, SCET introduced the Alumni Expo, inviting students who have taken an SCET course – and are therefore SCET alumni – back to pitch their solutions competitively before the main event. Participants voted Optigenix as the winning team.

Jai Williams and Gabe Abbes from Optigenix (Photo by Adam Lau/Berkeley Engineering)

Optigenix

Optigenix was founded by two Cal athletes, Cal high jumper Jai Williams (Business Administration ’23) and cross country runner Gabe Abbes (Business Administration ’24), who realized they were taking the exact same supplements despite playing different sports and having different injuries. High-intensity sports make every athlete’s journey with health and injuries unique. Optigenix offers tailored analysis and health recommendations for future athletes, using blood and genetic testing to provide individualized supplement packages.

Collider Cup XIV presenters

While not every team can win the Collider Cup, these teams also presented innovative solutions that have the potential to solve big problems for consumers.

Dart

DeepSafe

AlkeLink

Re:Dish

Homemore

Collider Cup features one venture project from each of its courses every semester. While the teams chosen for the Collider Cup are certainly among the most visionary, SCET courses produce over one hundred innovative projects every semester.

SCET Teaching Awards

At SCET, experiential and hands-on learning is the cornerstone, led by our team of seasoned instructors, who provide expert feedback to help students develop their ideas. In addition to faculty, SCET hires course assistants to run the classes and ensure student innovators have the best environment to learn. Beyond acknowledging the entire teaching staff, SCET gave special recognition to an instructor and a course coordinator at Collider Cup XIV.

Best Instructor Award

This semester’s SCET Best Instructor Award went to Anne Cocquyt, instructor for the Changemaker course ENGIN 183 Deplastify the Planet: How to Master the Sustainable Transition.

Anne Cocquyt speaks onstage after receiving the Best Instructor Award.
Anne Cocquyt, winner of the SCET Best Instructor Award for Deplastify the Planet (Photo by Adam Lau/Berkeley Engineering).

“Twenty years ago I swore I would not step into the footsteps of my grandparents, parents, and both of my sisters, who are all teachers,” Cocquyt said. “And here I am today. I wanted to do business and startups and now I have the chance to do both!”

Cocquyt thanked SCET for giving her the opportunity to hone teaching skills and the founders of Deplastify the Planet for honoring “such an important topic for our planet.”

Best Course Coordinator Award

This semester’s SCET Best Course Coordinator Award went to Kelly Chou, also from Deplastify the Planet.

Kelly Chou holds up a hand in celebration while walking to stage to accept her Best Course Coordinator Award.
Kelly Chou, winner of the Best Course Coordinator Award for Deplastify the Planet (Photo by Adam Lau/Berkeley Engineering).

“Kelly put students first always! I’ve never had a course coordinator take our feedback so seriously. Her slides and her teaching styles are always easy to follow – there [is] never information given where I didn’t think it was relevant to the course. I constantly feel heard by Kelly,” a student said.

Fall Course Preview

During the deliberation period, SCET took the opportunity to showcase its upcoming Fall 2024 course offerings, which are currently open for enrollment for Berkeley students:


Judges Jay Onda, Stacey King, and Sandy Diao sit in the front row enjoying ice cream. Stacey speaks into a microphone.
SCET Collider Cup XIV judges; from left to right: Jay Onda, Stacey King, Sandy Diao. (Photo by Adam Lau/Berkeley Engineering)

Judges & Host

A special acknowledgment to our esteemed judges Jay Onda from Marubeni Ventures Inc., Berkeley alumna Sandy Diao from Descript, and Stacey King from Cal Innovation Fund for their fantastic feedback to students at the event. 

Rachel Eizner gestures while presenting on the Collider Cup stage with trophies on a table in the background.
Rachel Eizner (pictured) and Benecia Jude Jose served as student emcees for the event (Photo by Adam Lau/Berkeley Engineering)

Also, we want to recognize our student emcees, Benecia Jude and Rachel Eizner, for their amazing work to keep the audience engaged and entertained! 

That’s a wrap! Cheers to everyone’s efforts in making Collider Cup XIV a wonderful success.

Why Hallucinations Matter: Misinformation, Brand Safety and Cybersecurity in the Age of Generative AI

“There are three kinds of lies: lies, damned lies, and statistics.” – Mark Twain (maybe)

AI Generates Pink Elephant

In the present day, Mark Twain’s (or Benjamin Disraeli’s?) supposed quote might better be recast as, “There are three kinds of lies: lies, damned lies, and hallucinations”.  In our age of generative AI, the technology’s propensity to create false, unrelated, “hallucinated” content may be its greatest weakness.  Major brands have repeatedly fallen victim to hallucination or adversarial prompting, resulting in both lost brand value and lost company value.  Notable examples include the chatbot for the delivery firm, DPD, aspersing the company; Air Canada having been found financially liable by the Canadian courts for real-time statements made by its chatbot; inappropriate image generation issues at Midjourney and Microsoft; and of course, Google losing $100 billion in market value in a single day following a factual error made by its Bard chatbot.  In each instance, brand value that was carefully accreted in the age of static content did not prove resilient to hallucination from the age of generative AI content.

Hallucinations in AI are vitally important, particularly as we enter a world where AI-enabled agents become ubiquitous in our personal and professional lives.  Humans communicate principally via language, through both sight and sound, and our latest AI breakthroughs in large language models (LLMs) portend a quantum jump in how we’ll evermore communicate with our computers.  Brain-computer interfaces notwithstanding, conversational language with agents will be our ultimate interface to AI as collaborator.  In such a scenario, with trust paramount, we cannot afford the risk of hallucinated AI output.  But hallucinations and our agent-based future are now firmly on a collision course.

In addition, in days gone by, a devious hacker might have targeted corporate information systems through deep knowledge of programming languages, SQL, and the attack surfaces within technology stacks.  Today, thanks to the ubiquity of natural language interfaces to the fabric of computing, a devious hacker can attack a brand by being proficient in just a single area of technology: the ability to communicate via natural language.

So, what’s a brand owner to do as these risks continue to multiply?  Companies such as Microsoft and Blackbird AI have started to address some of the challenges in generated content, but as an industry we’ve just begun to scratch the surface.  Happily, there are a range of technologies being developed to help reduce hallucination and increase factuality.  It’s imperative that all of us have a solid grasp of these solutions and the underlying problems they address.  The downside risks in AI hallucination are profound and equally impact individuals, businesses and society.

Why Does Generative AI Hallucinate?

We’ve been writing computer software for 80 years and we still produce bugs in our source code, leading to execution errors.  It should come as no surprise to us that, as we find ourselves engulfed by data-driven technologies such as AI, we can find “bugs” within data’s complexity and volume, leading to AI hallucinations.  The etiology of AI hallucination includes biased training data, the computational complexity inherent in deep neural networks, lack of contextual / domain understanding, adversarial attack, training on synthetic data (“model collapse”), and a failure to generalize the training data (“overfitting”).

The simple model for classifying hallucinations is that they’re either of the factuality variety or of the faithfulness variety (Huang et al. 2023).  As defined in Huang et al.’s survey article, “Factuality hallucination emphasizes the discrepancy between generated content and verifiable real-world facts, typically manifesting as factual inconsistency or fabrication”, while “faithfulness hallucination refers to the divergence of generated content from user instructions or the context provided by the input, as well as self-consistency within generated content”.  More simply, factuality hallucinations get output facts wrong, while faithfulness hallucinations are unexpected (bizarre) outputs.

AI’s Anti-hallucinogens

Addressing the causes of AI hallucination or inaccuracy has typically involved improving the quality of prompts as well as better tuning the LLM itself.  There are also emerging techniques for scoring LLM outputs for factuality.  The number of technologies being developed across these categories to capture factuality and faithfulness is large and increasing.  A taxonomy pipeline for the different types of mitigations might be as follows:

Diagram: a taxonomy pipeline of hallucination mitigation techniques

Before we discuss threats to brand safety and cybersecurity, let’s take a look at a few of the more prominent AI anti-hallucinogens.  This list is not definitive and is meant merely to give a flavor of the types of solutions now available.  Brand owners today, as much as technologists, will need a familiarity with all current approaches.

Prompt Engineering

Prompt engineering involves crafting specific prompts or instructions that guide the LLM towards generating more factual and relevant text. Providing additional context or examples within the prompt can significantly reduce hallucinations.

Prompt engineering may be our most accessible means of hallucination mitigation, but the familiar manual approach is neither scalable nor optimal.  The future of prompt engineering is programmatic.  LLMs are models, after all, and models are best manipulated not by human intuition but by bot-on-bot warfare featuring coldly calculating algorithms that score and self-optimize their prompts.  Notable work in automated prompt engineering has been done in Google DeepMind’s OPRO (Yang et al. 2023), the aptly named Automated Prompt Engineer (APE; Zhou et al. 2023), DSPy (Khattab et al. 2023), VMware’s automatic prompt optimizer (Battle & Gollapudi 2024), and Intel’s NeuroPrompts (Rosenman et al. 2023).
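
To make the programmatic idea concrete, here is a minimal, hedged sketch in Python of score-and-select prompt optimization. It is not the OPRO, APE, or DSPy API; `generate` is a placeholder for whatever LLM client a team actually uses, and the exact-match scoring is deliberately simplistic.

```python
# Minimal sketch of automated prompt selection (not the OPRO / APE / DSPy APIs).
# `generate` is a placeholder for a real LLM client; scoring here is naive containment-match.
from typing import Callable, List, Tuple

def score_prompt(template: str,
                 eval_set: List[Tuple[str, str]],
                 generate: Callable[[str], str]) -> float:
    """Fraction of evaluation questions whose expected answer appears in the output."""
    hits = 0
    for question, expected in eval_set:
        answer = generate(template.format(question=question))
        hits += int(expected.lower() in answer.lower())
    return hits / len(eval_set)

def best_prompt(candidates: List[str],
                eval_set: List[Tuple[str, str]],
                generate: Callable[[str], str]) -> str:
    """Return the candidate template with the highest factual-accuracy score."""
    return max(candidates, key=lambda c: score_prompt(c, eval_set, generate))

candidates = [
    "Answer concisely and cite only verifiable facts.\nQ: {question}\nA:",
    "If you are not certain of the answer, reply 'unknown'.\nQ: {question}\nA:",
    "Think step by step, then give a one-sentence answer.\nQ: {question}\nA:",
]
# chosen = best_prompt(candidates, eval_set, generate)
```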

Retrieval-Augmented Generation (RAG)

Retrieval-augmented generation (Lewis et al. 2020) has been a tremendous tool supporting the injection of context, in real-time, into LLM prompts, thereby improving the fidelity of generated output and the reduction of hallucination (Shuster et al. 2021).

RAG works by augmenting prompts with information retrieved from a knowledge store, for example a vector database.  In the diagram below, the elements of a prompt are mapped into a vector embedding.  That vector embedding is then used to find up-to-date content – indexed via a vector embedding also – that is most similar to what’s being asked for in the prompt.  This additional context augments the original prompt, which is then fed into the LLM, producing a fully-contextualized response.

Diagram: the retrieval-augmented generation (RAG) pipeline, from prompt embedding to contextualized response

RAG has shown itself to be a powerful tool.  It’s not dependent on the LLM’s internal training-time parameters, but instead boosts the original prompt with context retrieved in real-time from a knowledge store.  Because no LLM retraining is required, RAG is a more resource-efficient solution than traditional fine-tuning.  A poster-child application of RAG in a consumer setting was provided by BMW at the recent CES.
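
A stripped-down version of the retrieval loop described above might look like the following Python sketch. The bag-of-characters `embed` function and the in-memory corpus are toy stand-ins for a real embedding model and vector database; the function returns the augmented prompt, which would then be passed to an LLM.

```python
# Toy RAG retrieval sketch: embed the query, rank documents by cosine similarity,
# and prepend the most relevant ones to the prompt. Swap embed() for a real
# embedding model and the list below for a real vector database.
import numpy as np

documents = [
    "Retrieval-augmented generation injects retrieved context into LLM prompts.",
    "Fine-tuning adapts a pre-trained model to a specific task or domain.",
    "Knowledge graphs encode entities and relationships as structured triples.",
]

def embed(text: str) -> np.ndarray:
    """Toy bag-of-characters embedding; a real system would call an embedding model."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec

doc_vectors = [embed(d) for d in documents]   # built once, at indexing time

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_rag_prompt(question: str, k: int = 2) -> str:
    q_vec = embed(question)
    ranked = sorted(range(len(documents)),
                    key=lambda i: cosine(q_vec, doc_vectors[i]), reverse=True)
    context = "\n".join(documents[i] for i in ranked[:k])
    return ("Answer using only the context below. If it is insufficient, say so.\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

print(build_rag_prompt("What does RAG do?"))
```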

Fine-tuning

Fine-tuning an LLM on specific datasets focused on factual tasks (Tian et al. 2023) can significantly improve its ability to distinguish real information from fabricated content.  LLM fine-tuning is the process of adapting a pre-trained language model to perform a specific task or align within a specific domain. It involves training the model on a task-specific, labeled data set, and adjusting its parameters to optimize performance for the targeted task. Fine-tuning allows a model to learn domain-specific patterns and nuances, enhancing its ability to generate relevant and accurate outputs. 

Fine-tuning has its limitations, however.  Its effectiveness depends on the quality and representativeness of its training data and the selection of its hyperparameters during the fine-tuning process.  As fine-tuning is done prior to runtime, it also suffers from lack of access to up-to-date information (in contrast to RAG, for example).  Finally, fine-tuning is computationally expensive and, depending on the complexity of the task, may bring with it a need for a significant amount of additional data.
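
As an illustration only, a bare-bones supervised fine-tuning run with the Hugging Face Transformers and Datasets libraries might look like the sketch below. The base model, the two-example dataset, and the hyperparameters are placeholders; a real factuality-focused run (as in Tian et al. 2023) would use a far larger, carefully curated dataset.

```python
# Bare-bones causal-LM fine-tuning sketch with Hugging Face Transformers.
# Model, data, and hyperparameters are illustrative placeholders only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"                                   # stand-in for the model being tuned
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token             # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny, illustrative dataset of factual question-answer pairs.
examples = [
    {"text": "Q: In what year did Apollo 11 land on the Moon?\nA: 1969."},
    {"text": "Q: What is the capital of France?\nA: Paris."},
]
dataset = Dataset.from_list(examples).map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-factual", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=5e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```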

Low-Rank Adaptation (LoRA)

Low-Rank Adaptation (Hu et al. 2021) is a technique for fine-tuning LLMs that focuses on efficiency and reducing unrealistic outputs.  It works by freezing the pre-trained weights and introducing small, trainable low-rank matrices that approximate the weight updates needed for the new task.  This significantly reduces the number of parameters the model needs to learn, making training faster and requiring less memory.

While LoRA doesn’t directly address the root cause of hallucinations in LLMs, which stem from the biases and inconsistencies in training data, it indirectly helps by enabling more targeted fine-tuning.  By requiring fewer parameters, LoRA allows for more efficient training on specific tasks, leading to outputs that are more grounded in factual information and less prone to hallucinations.  This synergy between efficiency and adaptation makes LoRA an effective tool in producing high-fidelity LLM output.
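
A hedged sketch of attaching LoRA adapters with the PEFT library follows; the base model, rank, scaling factor, and target modules are common illustrative choices for GPT-2, not recommendations.

```python
# LoRA adapter sketch using the PEFT library; hyperparameters are illustrative.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                          # rank of the low-rank update matrices A and B
    lora_alpha=16,                # scaling applied to the low-rank update
    lora_dropout=0.05,
    target_modules=["c_attn"],    # GPT-2's fused attention projection layer
    task_type=TaskType.CAUSAL_LM,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()   # typically well under 1% of the base parameters
# `model` can be dropped into the same Trainer loop as in the fine-tuning sketch above.
```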

Confidence Scores

Some hallucination mitigation techniques assign confidence scores to the LLM’s outputs (Varshney et al. 2023). These scores indicate how certain the model is about its generated text. It’s then possible to filter outputs in favor of high confidence scores, reducing the likelihood of encountering hallucinations.  Notable work has been done here in SelfCheckGPT (Manakul et al. 2023).  SelfCheckGPT detects hallucinations via sampling for factual consistency: if an LLM generates similar responses when sampled multiple times, its response is likely factual.  Though the model is shown to perform well, the practicality of performing real-time consistency checks across multiple samples, in a scalable fashion, may be limited.
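
The core sampling-for-consistency idea can be illustrated with a simplified sketch; this is inspired by SelfCheckGPT but is not its implementation, and `generate` again stands in for a sampling-enabled LLM call.

```python
# Simplified sampling-consistency check (inspired by, but not identical to, SelfCheckGPT).
# `generate(prompt, temperature)` is a placeholder for a sampling-enabled LLM call.
from difflib import SequenceMatcher
from itertools import combinations

def consistency_score(prompt, generate, n_samples: int = 5) -> float:
    """Mean pairwise similarity of sampled answers; low scores flag likely hallucination."""
    samples = [generate(prompt, temperature=1.0) for _ in range(n_samples)]
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in combinations(samples, 2)]
    return sum(sims) / len(sims)

# Responses scoring below a chosen threshold (say 0.5) can be suppressed or regenerated.
```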

Yet another approach to factuality was provided via Google’s Search-Augmented Factuality Evaluator (SAFE; Wei et al. 2024).  In a sort of reverse-RAG, “SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results”.

Knowledge Graphs

Knowledge graphs can act as anchors for LLMs, reducing the risk of hallucination by providing a foundation of factual information.  Knowledge graphs are structured databases that explicitly encode real-world entities and relationships, and connecting them to LLMs can provide factual utility at every stage of the generative pipeline: supplying context for prompts, supporting training and fine-tuning, and testing the factual accuracy of the prompt’s response (Guan et al. 2023).  This helps ground the LLM’s responses in reality, making it less prone to hallucinations.

Knowledge graphs (KG) employed to reduce hallucinations in LLMs at different stages.  From Agrawal et al. 2023

It’s key to note that knowledge graphs encode both facts and context.  Entities within the knowledge graph are linked to each other, showing relationships and dependencies.  This allows the LLM to understand how different concepts interact.  When generating text, the LLM can then draw on this contextual information to ensure consistency and avoid nonsensical outputs.  For example, if an LLM is prompted about an historical event, it can utilize the knowledge graph to ensure that output regarding people, places, dates and events conform with historical fact.
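
As a toy illustration of that final, testing stage, generated claims can be reduced to (subject, relation, object) triples and checked against the graph. The hand-written triples and the assumed upstream triple-extraction step are placeholders, not a real knowledge-graph API.

```python
# Toy post-hoc fact check against a knowledge graph of (subject, relation, object) triples.
# The hand-written triples stand in for a real graph database; extracting triples from
# generated text would itself require an LLM or a dedicated parser.
KNOWLEDGE_GRAPH = {
    ("Apollo 11", "landed_on", "Moon"),
    ("Apollo 11", "landing_year", "1969"),
    ("Paris", "capital_of", "France"),
}

def unsupported_claims(claimed_triples):
    """Return the claims that do not appear in the knowledge graph."""
    return [t for t in claimed_triples if t not in KNOWLEDGE_GRAPH]

claims = [("Apollo 11", "landing_year", "1969"),
          ("Apollo 11", "landing_year", "1971")]      # a factual hallucination
print(unsupported_claims(claims))   # -> [('Apollo 11', 'landing_year', '1971')]
```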

Garbage In, Garbage Out

“It is the habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not desire.”

Thucydides

Hallucination is inevitable.  So says the title of a recent paper (Xu et al. 2024), with the researchers finding that “it is impossible to eliminate hallucination in LLMs”, specifically because “LLMs cannot learn all of the computable functions and will therefore always hallucinate”.  Though we might be able to reduce the rate of hallucination through innovation, evidence shows that we will never be able to eliminate it.

Further, and dismayingly, AI’s anti-hallucinogens prove to be not just a cure for fallacy but also a steroid to enhance it.  The very same techniques we might use to reduce hallucination – RAG, fine-tuning, knowledge graphs – are dependent on data, data that can easily be biased to reinforce specific views; one person’s truth is after all another’s hallucination.  Simply, injecting context for “truth” in an LLM requires (the ever elusive) 100% objective data.    

This brings us back to Thucydides’ quote of 2400 years ago, and history’s first recorded observation of the insuperable (and, apparently, eternal) human condition of confirmation bias.  Human beings have always been particularly vulnerable to holding their own beliefs, seeking out confirmations of those beliefs, and disregarding all others.  In our technology campaign for ridding generative AI of hallucinations, we’ve created tools that will help in doing just that and will equally enable the practitioners of disinformation: garbage in (toxic content in knowledge graphs, content databases, or fine-tuning), garbage out (in generative AI).  As we solve one problem, we inadvertently feed another.  

Recent work by Anthropic (Durmus et al. 2024) and elsewhere (Salvi et al. 2024) has produced somewhat dispiriting results, from a societal perspective, on the persuasiveness of language models, once again highlighting the critical need for AI’s factuality and faithfulness in an agent-based world.  Anthropic’s work on the LLM-driven persuasion of humans found “a clear scaling trend across [AI] model generations: each successive model generation is rated to be more persuasive than the previous” and also “that our latest and most capable model, Claude 3 Opus, produces arguments that don’t statistically differ in their persuasiveness compared to arguments written by humans” (emphasis added).

The Salvi work on LLM persuasiveness went a step further, testing the impact of giving the LLM access to basic sociodemographic information about their human opponent in the persuasion exercise.  The study found “that participants who debated GPT-4 with access to their personal information had 81.7% (p < 0.01; N=820 unique participants) higher odds of increased agreement with their opponents compared to participants who debated humans.”  The study’s authors concluded that “concerns around personalization are meaningful and have important implications for the governance of social media and the design of new online environments”.  We can absolutely anticipate that malevolent actors will use the LLM technologies now at hand to instrument conversational agent responses with all manner of multi-modal sociodemographic data in order to maximize subjective persuasion.

Agents + Hallucination = Titanic + Iceberg

“You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. … This type of software—something that responds to natural language and can accomplish many different tasks based on its knowledge of the user—is called an agent. … Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.”

Bill Gates, 2023

As Bill Gates, NVIDIA and others have rightly noted, we’re now passing into a new age of computation, one based on agents driven by conversational input/output and AI.  If unified chatbot agents become our principal means of interfacing with computers, and if those agents (aka. “super-spreaders”?) are subject to all of the vulnerabilities of LLM-based generative AI – hallucination, misinformation, adversarial attack (Cohen et al. 2024), bias (Haim et al. 2024, Hofmann et al. 2024, Durmus et al. 2023) – then we’ll all suffer individually, and also collectively as businesses and societies.

Just as LLMs have shown themselves responsive to automated prompt engineering to yield desired results, so too have they shown themselves susceptible to adversarial attack via prompt engineering to yield malicious results (Yao et al. 2023, Deng et al. 2023, Jiang et al. 2024, Anil et al. 2024, Wei et al. 2023, Rao et al. 2024).  Exemplary work in using an automated framework to jail-break text-to-image gen-AI, resulting in the production of not-suitable-for-work (NSFW) images, was done in the SneakyPrompt project (Yang et al. 2023).  “Given a prompt that is blocked by a safety filter, SneakyPrompt repeatedly queries the text-to-image generative model and strategically perturbs tokens in the prompt based on [reinforcement learning and] the query results to bypass the safety filter.”

Beyond chatbots, the risks of hallucination extend into many other application areas of AI.  LLMs have slipped into the technologies utilized to build autonomous robots (Zeng et al. 2023, Wang et al. 2024) and vehicles (Wen et al. 2024).  Hallucinations have also shown themselves to be a significant issue in the field of healthcare, both in LLM-driven applications (Busch et al. 2024, Ahmad et al. 2023, Bruno et al. 2023) and in medical imaging (Bhadra 2021).  Data as attack vector has been shown in self-driving vehicle technology, through “poltergeist” (Ji et al. 2021) and “phantom” (Nassi et al. 2020) attacks, and has also been demonstrated in inaudible voice command “dolphin” attacks (Zhang et al. 2017).

More unsettling still is the prospect of LLM-based agents being “integrated into high-stakes military and diplomatic decision making”, as highlighted by Stanford’s Institute for Human-Centered Artificial Intelligence (Rivera et al. 2024).  Here, the researchers found “that LLMs exhibit difficult-to-predict, escalatory behavior, which underscores the importance of understanding when, how, and why LLMs may fail in these high-stakes contexts”.  In such settings, the risk that hallucination could escalate human conflict is clearly an unacceptable one.

From Hallucinations to “Aligned Intelligence”

Language is the human species’ chief means of communication and is conveyed via both sight and sound.  The impact of LLM AI technology will remain powerful specifically because it maps so effectively the language foundation of human communication.  Humans also sense and perceive the non-language parts of our world, and we do this overwhelmingly through our sense of vision.  A burgeoning field within AI is consequently – and unsurprisingly – the large vision model (LVM).  Are LVMs also susceptible to hallucination, like their LLM brethren?  Yes.

Similar to training LLMs on massive language data sets, large vision models are trained on massive image data sets, giving computer vision systems an ability to understand the content and semantics of image data.  As vision-based AI systems become increasingly ubiquitous in applications such as autonomous driving (Wen et al. 2023), the same issues of hallucination and inaccuracy that we see in LLMs will appear in LVMs.  LVMs are likewise responsive to prompt engineering (Wang et al. 2023) and susceptible to hallucination (Liu et al. 2024, Li et al. 2023, Wang et al. 2024, Gunjal et al. 2023).  The difference between a hallucinating LLM and a hallucinating LVM may be that the latter has a better chance of actually killing you.

Finally, our current efforts to ensure the alignment (Ji et al. 2023) of AI-driven agents with humans’ goals (in LLMs, for factuality and faithfulness) are part of a much broader narrative.  AI continues to evolve in the direction of long-term planning agents (LTPAs): autonomous AI agents that are able to go beyond mere transformer-driven token generation to instead plan and execute complex actions across very long time horizons.  The nuanced, longitudinal nature of LTPAs will make it exceedingly difficult to map the faithfulness / alignment of such models, exposing humans (and the planet) to unmapped future risks.

It is for this reason that the recent article “Regulating advanced artificial agents” by Yoshua Bengio, Stuart Russell and others (Science: Cohen et al. 2024) warned that “securing the ongoing receipt of maximal rewards with very high probability would require the agent to achieve extensive control over its environment, which could have catastrophic consequences”.  The authors conclude, “Developers should not be permitted to build sufficiently capable LTPAs, and the resources required to build them should be subject to stringent controls”.  We might view our current efforts to contain hallucination in LLM-driven agents as merely the first crucial skirmish in what will be a far more difficult struggle in our “loss of control” with LTPAs.

Where Do We Go from Here?

“Picture living in a world where deepfakes are indistinguishable from reality, where synthetic identities orchestrate malicious campaigns, and where targeted misinformation or scams are crafted with unparalleled precision”

Ferrara 2024

Gen-AI is still in its nascence, but the damage that can accrue from the technology’s hallucination and inaccuracy is already manifold: it impacts us as individuals, it impacts AI-based systems and the companies and brands that rely on them, it impacts nations and societies.  And as modern LLM-driven agents become increasingly prevalent in all aspects of daily life, the issue of hallucination and inaccuracy becomes ever more crucial.

Hallucination will remain important as long as humans communicate using words.  From brand safety, to cybersecurity, to next-gen personal agents and LTPAs, factuality and faithfulness are everyone’s problem.  If these issues are inadequately addressed, we risk building a generation’s worth of technology atop a foundation that is deeply vulnerable.  Inevitably, we might find ourselves in a forever war, with weaponized AI agents – “AI-based psychological manipulation at internet scales within the information domain” (Feldman et al. 2024) – competing with innovation in the mitigation of hallucination and amplified misinformation.  This conflict will be continuous and will forever require from us more robust and interpretable AI models, a diversity of training data, and safeguards against adversarial attack.  Brand safety, system safety and, most importantly, individual and societal safety all hang in the balance.

We may now be finding that the AI “ghost in the machine” that we all should fear is not sentience, but simple hallucination.  As Sophocles almost said, “Whom the gods would destroy, they first make hallucinate”.

First-Ever Study of AAPI Representation Among VC Investors Finds Persistent Underrepresentation

Despite a perception of inclusion, new data from DECODE, UC Berkeley’s SCET, and AAAIM show that AAPIs face an imbalanced reality. 

In the fields of investments and venture capital, the narrative often revolves around a world full of innovation and opportunity. However, beneath the surface lies a glaring reality that challenges these notions of potential. In a first-of-its-kind study, research from DECODE, UC Berkeley’s Sutardja Center for Entrepreneurship & Technology (SCET), and AAAIM now sheds light on a concerning issue: the significant underrepresentation of Asian Americans and Pacific Islanders (AAPIs) in funding opportunities and career advancement within the VC investment community.

This might come as a surprise to some people. There are a handful of highly visible success stories at the top, and the Forbes Midas List continually ranks several AAPI investors among the top 10. Furthermore, AAPIs indeed comprise an important part of the workforce throughout the tech industry and venture capital. Compounding the surprise factor is that the combined term “AAPI” is incredibly broad. AAPI encompasses people with origins in more than 50 distinct ethnic groups, speaking hundreds of languages, across wide geographic regions including the Far East, Southeast Asia, the Indian subcontinent, and the Pacific Islands. The cultures, histories, and lived experiences of AAPIs are vast beyond comprehension. These factors all reinforce the prevailing assumption that there’s no shortage of AAPI representation in technology and entrepreneurship.

However, our recent research quantifies for the first time that – despite these perceptions of success and inclusion – AAPIs face a pervasive bias within the VC community. A number of studies have looked at the systemic imbalance faced by women and other diverse communities but, until now, there has been limited research related to the AAPI community in VC. Our research also quantifies that AAPI-owned VC firms continue to face proportionally low assets under management (AUM) and unique challenges in fundraising.

Here are some of the most surprising revelations from our research. 

  1. Only 3.3% of VC funds are AAPI owned. This low, single-digit figure is far below the common perception. Moreover, these AAPI-owned funds manage only 2.9% of total AUM. The mismatch makes even less sense when you look at performance: 52.6% of all AAPI-owned funds have been ranked in the top quartile for fund performance.
  2. AAPIs are often left out of DEI initiatives by LPs. Among the top 100 limited partners (LPs) that allocate to venture, we found an alarming fact: 19% explicitly exclude AAPIs from these critical DEI initiatives. Clearly, the incorrect perception that AAPI professionals are doing well in VC is resulting in AAPIs being excluded. Only 9% specifically include AAPIs in DEI initiatives and goals. Furthermore, 72% of diversity initiatives do not clearly state the inclusion of AAPIs in programming, even though other minority groups and women are mentioned. This omission could be interpreted to mean that AAPIs are not included in much of this remaining 72%.
  3. The path to promotion and becoming an investing partner takes 41% longer for AAPIs. Before rising to become an investment partner, AAPI professionals worked in junior roles for an average of 3 years, 10 months; non-AAPI professionals were promoted after an average of 2 years, 9 months. This slower trajectory is even more notable when combined with our finding that AAPIs are more likely to have additional work experience before joining a VC firm (coming from prior roles as operators, in finance, or in consulting).
  4. More AAPIs with junior VC experience end up starting their own funds. Rather than waiting to be recognized inside their current firms, it appears that a difficult path to promotion could be leading more AAPIs with junior VC experience to start their own funds.  Proportionally more AAPI partners with junior VC experience started their own fund (16.6%) compared to their non-AAPI (13.7%) counterparts.

There’s a lot at stake due to the underrepresentation of AAPIs. 

  • Policies and practices are adopted that limit opportunities specifically for AAPIs. We found examples of unconscious exclusion of AAPIs from leadership programs. These kinds of patterns can put senior-level investment roles out of reach, despite AAPIs’ qualifications. Our interviews also revealed recurring themes where many fund managers said they face additional hurdles like stereotyping and unconscious bias from LPs. 
  • There is less mentorship for AAPIs. Another pattern we repeatedly heard is that AAPIs are afforded fewer structured mentorship opportunities. These programs are in place because they are shown to accelerate advancement to leadership roles. With limited AAPI participation, many advancement opportunities are being denied.
  • AAPI investors are poised to recognize innovations that non-AAPI investors may overlook. With AAPIs representing an incredibly diverse range of ethnicities and cultures, AAPI investors are well-positioned to recognize a wider range of entrepreneurial ideas. 
  • Despite proven track records of successful investments, the lack of AAPIs in investment partner roles means that future returns are capped. Previous AAAIM research quantified the excellent performance of AAPI-owned VC funds. In fact, 52.6% of these AAPI-owned funds delivered top-quartile performance, compared to 24.1% for non-AAPI funds of the same vintage year and strategy. 

For many AAPI individuals working in VC and investing, these research findings validate the gut feelings they’ve held for some time. One interviewee commented on the experience of seeking a promotion: “The ‘Bamboo Ceiling’ effect has caused AAPI managers to lack leadership opportunities. People tend to promote others who look like themselves, so it is harder to stand out and be promoted since most people in the space are white.” Another interviewee underscored a belief that unfortunately continues to persist: “Asians are tagged as quiet, hardworking, and behind the scenes.” A third interviewee shared the insight that, “There are diversity pushes but capital is given to old managers, and not new managers who fall under the diversity initiative.”

As a result of quantifying these feelings for the first time, we see a number of actions that must be taken to improve DEI outcomes for AAPIs.

  1. We must raise awareness about how underrepresentation is experienced differently by AAPIs in the industry, and ensure that data about AAPIs is captured in DEI reporting. By backing up anecdotes and gut-feelings with hard data and showing the realities facing AAPIs in VC, we can challenge existing narratives and catalyze change. New levels of awareness will also lead to better policy-making. 
  2. We need better coalition-building across AAPI groups, and with other diverse communities. We’d like to foster a greater sense of common mission on DEI and create a unified voice with other industry organizations. We believe by working together and pushing for a more inclusive approach, we’ll yield better representation among VCs.  
  3. It’s critical to keep the spotlight on DEI efforts, for all communities. We are extremely disturbed and worried by current trends where DEI programs are being deemphasized and even eliminated. It would be the ultimate tragedy for these critical efforts — which are meant to increase visibility and participation among people of all backgrounds — to be discarded.  

It’s true that we’ve seen breakout success stories among the AAPI community, such as Khosla Ventures, Venrock, and Intel Capital, and a number of Silicon Valley’s earliest VC firms, such as Mayfield, Sierra, and Lightspeed, now also include AAPI leaders. These accomplishments must be celebrated and recognized. However, the consistently low levels of AUM share for AAPI-owned VC firms and our myriad other findings indicate that our community is viewed largely as part of the support-level investment workforce.

Diversity should not be just a buzzword, but an elevated practice that creates increased opportunities and a fresh look at how best to enhance the market. It’s critical to fostering progress and innovation, and this new research provides long-overdue clarity about the imbalances that AAPIs are facing. Homogenous leadership constrains the types of ideas that become supported with venture capital. In order for the big ideas of tomorrow to emerge and grow, the investors supporting those ideas must reflect the diversity of our communities, both in the United States and around the world. 

Read the full report

Overview of variety of data sources: 

  • 46 publications reviewed that talked about diversity in VC
  • 32 VC databases compiled 
  • 60+ fund managers interviewed 
  • 2,000+ fund manager profiles analyzed (primarily via LinkedIn) 
  • 700+ funds analyzed 
  • Incorporation of DEI into investment criteria among the top 100 LPs in the U.S.
  • $500 billion in VC AUM represented in the study 
  • Consolidated and consulted relevant data from a variety of organizations including: BLCK VC, NVCA, Harvard Business Review, Fairview Capital and Midas List, All Raise, EVCA 

Join us for Collider Cup XIV – UC Berkeley’s Premier Technology Entrepreneurship Showcase

Join us at Collider Cup XIV, where innovative Berkeley minds converge for the university’s premier technology showcase and pitch competition.

Event Details

  • Friday, May 3, 2024
  • Live, in-person @ Banatao Auditorium
  • 2:30 p.m. – 6 p.m.

Register now!


Collider Cup XIV is Berkeley SCET’s pitch competition and showcase of Berkeley’s top student venture teams from the Spring 2024 semester. This event is the culmination of a semester’s hard work, where student teams pitch their ventures to win the sought-after Collider Cup. In addition to the competition, SCET will share insights about its upcoming courses and offer an opportunity for invaluable networking with free food after the event.

Throughout the semester, students from SCET’s Spring 2024 venture courses have honed their startup ideas. These courses, open to all majors, foster interdisciplinary teams focused on creating solutions for societal problems using cutting-edge technologies like AI, foodtech, and healthtech.

Learn more about the courses that have propelled these teams to the forefront:

For full course details, visit SCET Courses Page.

After the pitches, stick around for complimentary food and networking in the Kvamme Atrium to connect with student innovators, faculty, and investors and embrace the spirit of entrepreneurship at UC Berkeley!

Exciting New Addition: Alumni Expo

New for Spring 2024: Arrive early and experience the first-ever SCET Student Alumni Collider Cup Expo! An hour before Collider Cup XIV kicks off, SCET alumni will showcase their ventures, competing for an interview with Pear VC. As a participant, you’ll play a crucial role in selecting the winning team, so come early to cast your vote and secure your seat for Collider Cup XIV.

→ Alumni (who have taken at least one SCET course prior to Spring 2024) are eligible to apply to join the expo. Apply by April 17th for consideration.

Announcing Esteemed Emcee and Judges

The event will be emceed by Benecia Jude Jose, a third-year student in Public Health & Data Science, passionate about revolutionizing DEI in clinical trials and promoting health equity. She is a former Collider Cup presenter and founder of CliniCAL, a health-technology project recognized by SCET and a UC Berkeley Changemaker.

The panel of industry-expert judges includes Jay Onda of Marubeni Ventures, Stacey King of the Social Innovation Fund, and Sandy Diao, a leading figure in growth at Descript. These judges bring a wealth of knowledge and experience to the table, providing students with invaluable feedback.

Benecia Jude, student emcee

Jay Onda, Marubeni Ventures

Stacey King, Cal Innovation Fund

Sandy Diao, Descript

Places and Prizes

Collider Cup XIV will award 1st, 2nd, 3rd, Most Innovative, and People’s Choice awards to competing teams. We’re thrilled to announce the following prizes from our exceptional partners:

  • SkyDeck Pad-13 Prize: The top team will automatically be accepted into the SkyDeck Pad-13 Incubator program. The 2nd place team will also receive an interview with SkyDeck staff.
  • TechCrunch Fund Prize: The top 3 teams will join the Berkeley delegation and receive Founder Passes for TechCrunch Disrupt 2024!
  • Pear VC Prize: The 1st place team will receive a guaranteed interview for PearX, a 14-week bootcamp with 1-on-1 support from investment partners, the talent team, and the go-to-market team.
  • SCET Prize: The winning teams will be invited to join a special networking training event where they will learn how to build connections from an expert consultant.

For more information on the partners and detailed descriptions of the prizes, visit the SkyDeck, TechCrunch, Pear VC, and SCET websites.

Ready to Witness Entrepreneurial Excellence?

Don’t miss the chance to see UC Berkeley’s brightest compete and connect. Join us for Collider Cup XIV – where tomorrow’s innovations take center stage today.

Register now to see what’s next in startups, tech, and innovation!

The post Join us for Collider Cup XIV – UC Berkeley’s Premier Technology Entrepreneurship Showcase appeared first on UC Berkeley Sutardja Center.

]]>
Pretotyping for SportsTech Innovation: A Practical Guide https://medium.com/@christynaserrano/pretotyping-for-sportstech-innovation-a-practical-guide-057a93adf4b0#new_tab Wed, 06 Mar 2024 01:11:13 +0000 https://scet.berkeley.edu/?p=25704 Pretotyping for Sports Innovation

The post Pretotyping for SportsTech Innovation: A Practical Guide appeared first on UC Berkeley Sutardja Center.

]]>
SCET Alum and Berkeley Professor Launch Generation Lab for Personalized Anti-Aging https://scet.berkeley.edu/scet-alum-and-berkeley-professor-launch-generation-lab-for-personalized-anti-aging/ Tue, 30 Jan 2024 21:14:18 +0000 https://scet.berkeley.edu/?p=25362 Berkeley Bioengineering Prof. Irina Conboy, Michael Suswal and SCET Alum and former Collider Cup winner, Alina SuSCET Alum and former Collider Cup winner, Alina Su, and University of California, Berkeley Bioengineering Professor Irina Conboy are co-founding Generation Lab with the Mission to Extend the Human Healthspan – Waitlist for Its Clinically Driven At-Home Aging Test + Personalized Aging Intervention Opens Today Generation Lab will offer the first test based on biomedical evidence that measures…

The post SCET Alum and Berkeley Professor Launch Generation Lab for Personalized Anti-Aging appeared first on UC Berkeley Sutardja Center.

]]>

SCET alum and former Collider Cup winner Alina Su and University of California, Berkeley Bioengineering Professor Irina Conboy are co-founding Generation Lab with the mission to extend the human healthspan. The waitlist for its clinically driven at-home aging test and personalized aging intervention opens today.

Generation Lab, which officially launched out of stealth today, will offer the first test based on biomedical evidence that measures an individual's biological age progression. The company is announcing its pre-seed funding and the opening of its test kit waitlist.

Co-founded by Irina Conboy, a preeminent aging and longevity researcher from UC Berkeley, Generation Lab combines a simple cheek-swab test with proprietary techniques to definitively measure molecular disbalance that indicates aging and risk of disease. The company works with clinicians who can recommend personalized interventions, and Generation Lab can measure the effectiveness of these over time.

According to the CDC, six in ten adults in the US have at least one chronic disease and four in ten adults have two or more. Many of these are correlated with aging. These diseases are a major cause of morbidity and mortality and place a significant burden on our healthcare system. Catching these conditions early and preventing their progression can save millions of lives and billions of dollars. That’s what Generation Lab plans to do.

Unlike biological clocks, which are based on linear models that predict aging and disease by comparing a few people to each other, Generation Lab quantifies the actual biological age of each person through precise measurement of molecular biological noise – a key barometer of aging. Moreover, the biological age is quantified by Generation Lab for each function of the body: inflammation, regeneration, homeostasis, reproduction, etc.  Generation Lab’s solution is based on peer-reviewed science – including groundbreaking research led by Dr. Irina Conboy, UC Berkeley, and published as a cover story in Aging in Sept 2023.

“Certain molecular disbalance causes degenerative diseases that flare up with aging,” explained Dr. Conboy, co-founder of Generation Lab, and Professor at University of California at Berkeley. “Generation Lab’s tests enable us to identify when a person has a proclivity towards certain conditions and suggest approaches that can help to attenuate, delay or even eliminate those risks. This paves the way to novel anti-aging medicine for identifying and treating diseases early – even when a person is pre-symptomatic – which leads to better outcomes.”

George Church, Ph.D., the Robert Winthrop Professor of Genetics at Harvard Medical School, is a renowned geneticist serving as an advisor to Generation Lab. "Irina and her team are taking cutting edge research straight out of the lab and applying it to a pressing issue: identifying disease much earlier than ever before possible using genetic markers," said Dr. Church. "Seeing this unfold, and having a front-row seat to its impact, is going to be a pivotal moment in epigenetics – it's incredibly exciting."

Here’s how Generation Lab works:

  • Obtain a test kit from a physician, clinic, or from Generation Lab directly.
  • Open the kit and scan the QR code, selecting your level of membership (entry-level subscriptions start at about $400 per year for three tests).
  • Follow the directions for a cheek swab and mail the sample back to Generation Lab in a pre-addressed envelope or return it to the clinician who provided the test.
  • Within a few weeks, Generation Lab will provide a personalized report highlighting the status of your specific molecular aging markers, and offering a virtual call with a physician to review clinical and lifestyle interventions you might consider to slow the aging process or improve health.
  • After subsequent Generation Lab tests, your clinician can review how effective each intervention was and its impact on your health and rate of aging.

Generation Lab’s other two co-founders are Alina Rui Su and Michael Suswal. Su, who studied longevity at Harvard Medical School and Berkeley, was previously founder and CEO of NovaXS. She serves as Generation Lab’s CEO, setting the vision and leading the company’s development. Suswal, who was previously co-founder and COO of tech startup Standard AI (where he remains on the Board), serves as COO, leading Generation Lab’s operations and commercialization efforts.

“As rejuvenation research expands, we need to focus on identifying the most effective measurement for aging. My passion in healthcare led me to focus on Aging & Regeneration since I was a researcher in Irina’s lab at Berkeley,” said Su. “Generation Lab’s technology has the potential to impact an 8-billion-person global population. We will be a one-stop shop for testing plus clinician recommendations for interventions with the ability to ship personalized products to your doorstep. I’m proud that we’re making it easy for people to take a more proactive role in their healthcare and better understand their risk of aging factors.”

Generation Lab has raised a pre-seed round from Transpose Platform and Sequoia China Founding Partner. The company expects its aging test kits to be available in Q1 2024. To join the waitlist, visit generationlab.co.

About Generation Lab

Generation Lab aims to extend the human healthspan with the first test based on clinically relevant biological evidence that measures individuals' biological age progression and risk of disease so everyone can live longer, healthier lives. Co-founded by a preeminent aging and longevity researcher from UC Berkeley, Generation Lab combines a simple cheek-swab test with proprietary techniques to definitively measure molecular disbalance that indicates aging and risk of disease. The company works with clinicians who can recommend personalized interventions, and Generation Lab can measure the effectiveness of these over time. Join the waitlist at generationlab.co.

The post SCET Alum and Berkeley Professor Launch Generation Lab for Personalized Anti-Aging appeared first on UC Berkeley Sutardja Center.

]]>
Data-X Alums Create PoliWatch to Monitor Politicians and Their Stocks https://medium.com/berkeleyischool/data-science-students-create-dashboard-to-track-insider-trading-50a3147a724f#new_tab Mon, 29 Jan 2024 18:12:53 +0000 https://scet.berkeley.edu/?p=25358 1 IoAU Lm96CS0FETMaqH4YQ

The post Data-X Alums Create PoliWatch to Monitor Politicians and Their Stocks appeared first on UC Berkeley Sutardja Center.

]]>
OpenAI, Reinforcement Learning, the Rights of Robots, and… Aliens? AI’s Cambrian Explosion https://scet.berkeley.edu/openai-reinforcement-learning-the-rights-of-robots-and-aliens-ais-cambrian-explosion/ Fri, 26 Jan 2024 18:52:13 +0000 https://scet.berkeley.edu/?p=25348 A sophisticated and imaginative visual representation of the article 'OpenAI, Reinforcement Learning, the Rights of Robots, and… Aliens AI’s Cambrian Explosion" via DALL-E / GPT-4.OpenAI Whiplash One day, there’ll be a movie, or at least a Harvard Business Review analysis, about the board vs. CEO drama that unfolded at OpenAI in the autumn of 2023.  Perhaps the conflict was simply a matter of office politics.  Or, as has been more darkly hinted, perhaps the matter was due to the…

The post OpenAI, Reinforcement Learning, the Rights of Robots, and… Aliens? AI’s Cambrian Explosion appeared first on UC Berkeley Sutardja Center.

]]>

OpenAI Whiplash

One day, there’ll be a movie, or at least a Harvard Business Review analysis, about the board vs. CEO drama that unfolded at OpenAI in the autumn of 2023.  Perhaps the conflict was simply a matter of office politics.  Or, as has been more darkly hinted, perhaps the matter was due to the development of technology that is computer science’s equivalent of a bioweapon: an artificial general intelligence (AGI) breakthrough that ultimately threatens humanity.  

At this point we don’t know the reason behind what happened at OpenAI, and we may never know.  On the AGI possibility, details about OpenAI’s technology have been scant beyond mention of a mysterious “Q*”, and Q*’s burgeoning mastery of basic math is purportedly its key advancement.  Independent of any office politics drama at OpenAI, it’s still crucially important however that we consider the Q* possibility above.  This is not because there’s a sinister AGI lurking out there now, but because the discussion around Q* helps illuminate the current state of AI and the next breakthroughs we can expect to see in the field.  

The Q* & A* Bricolage

So, what’s Q*, and is it somehow related to Q, whatever that is, and perhaps also to A*, whatever that is?  Why should we care about Q and A*, and even more pointedly, why is an ability to do basic math so interesting in the field of AI?  Most important of all, are there long-term societal implications here that go beyond mere breakthroughs in underlying AI reinforcement learning technology?  Before we get to discussing robots and alien life, let’s start with a quick review of the most interesting parts of AI today.

Natural intelligence yields us an ability to understand, reason and plan, enabling our autonomous behavior in complex and highly dynamic contexts, even for things as quotidian as planning our day.  If artificially intelligent systems are to take the next big step and operate autonomously within the complexity of real life, they too need to be able to understand, reason and plan sequenced actions.  An eventual AGI will need to integrate and synthesize knowledge from various domains, combining things like mathematical reasoning with a broader understanding of the world.  Cognitive abilities, such as natural language understanding, sensory perception, social intelligence, and common-sense reasoning will all be vital components in the development of a truly intelligent system.  As of this writing, AGI remains a distant goal.

As powerful as large language models (LLMs) may seem to be, they're in the end transformer-driven, token-producing statistical machines for predicting the next word.  There's an aura of intelligence that LLMs bring, but they're unfortunately incapable of intelligent reasoning and planning.  The abilities to reason and plan are hallmarks of natural intelligence, hence these abilities are sought after within AI.  Being able to plan enables systems to set goals, create strategies, and make decisions in complex and dynamic environments.  Math can provide a nice, simple proxy for logical and (multi-step) reasoning abilities.  Perhaps it's here that Q, A*, the ability to do basic math, and the mythic Q* all come into play.

Pavlov’s Dog

We’ve become very familiar recently with discriminative AI and generative AI.  Reinforcement learning (RL) is the next AI technology which will now become increasingly familiar.  You might have taught your dog to stay off the couch via a scheme of reward (doggie treat) or penalty (“bad dog!”).  That’s the essence of reinforcement learning: a sequence of rewards or punishments that help you map the optimal, multi-step path to a goal.  It’s through reinforcement learning that we can imbue AI with the desired ability to learn sequential decision-making within complex environments, and it’s this ability that unlocks the possibilities of autonomous AI.

The now-classic algorithm in the field of RL, Q-learning, was introduced in Christopher Watkins' 1989 Ph.D. thesis, "Learning from Delayed Rewards", which included rich references to classically conditioned learning in animals. In short (the thesis runs to well over 200 pages), the Q-learning algorithm Watkins defined is a reinforcement learning approach that learns an action-value function Q(s, a), from which a policy can be derived.  Q(s, a) estimates the expected cumulative (numeric) reward for taking action "a" in state "s", and the scoring can guide a system to take optimal actions toward a desired goal.  
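
To make the update rule concrete, here is a minimal tabular Q-learning sketch in Python.  The state and action counts, the hyperparameters, and the Gym-style environment interface in the commented-out training loop are illustrative assumptions for a toy task, not details from Watkins' thesis or from OpenAI's systems.

```python
import numpy as np

# Q[s, a] estimates the expected cumulative reward for taking action a in state s.
n_states, n_actions = 16, 4                 # assumed sizes for a toy grid-world-like task
alpha, gamma, epsilon = 0.1, 0.99, 0.1      # learning rate, discount factor, exploration rate

Q = np.zeros((n_states, n_actions))

def choose_action(state: int) -> int:
    """Epsilon-greedy: mostly exploit current Q estimates, occasionally explore."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state: int, action: int, reward: float, next_state: int) -> None:
    """Classic Q-learning update: nudge Q(s, a) toward the observed reward
    plus the discounted value of the best action available in the next state."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# Hypothetical training loop skeleton; `env` is an assumed environment with a
# Gym-style reset()/step() interface, not something defined in this article.
# for episode in range(1000):
#     state = env.reset()
#     done = False
#     while not done:
#         action = choose_action(state)
#         next_state, reward, done = env.step(action)
#         q_update(state, action, reward, next_state)
#         state = next_state
```

The single line in q_update() is the whole algorithm; everything else is bookkeeping around trial-and-error interaction with the environment.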

Q-learning is particularly useful in problems where the environment is not fully known in advance, hence is “model-free”, and the AI agent must learn through trial and error. It has been successfully applied in multiple domains, including game playing, robotics, and other autonomous systems.  Google DeepMind addressed the problem space of Atari video games using a variant of Q-learning known as a Deep Q Network (DQN; Mnih et al. 2013).  DQN combined the Q-learning algorithm with deep learning techniques, particularly convolutional neural networks, to approximate and learn the optimal action-value function in a continuous state space. DQN was designed to handle complex and high-dimensional input data, making it well-suited for tasks like playing video games or robotic control in dynamic environments.  Indeed, DQN was found to outperform a human expert across three separate Atari games.
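
As a rough illustration of the DQN idea (approximating Q with a neural network and regressing it toward a bootstrapped target), here is a PyTorch-flavored sketch.  The small fully connected network, the tensor shapes, and the replay-batch layout are assumptions made for brevity; the original work used convolutional networks over raw Atari frames.

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """A tiny Q-network: maps an observation vector to one Q-value per action."""
    def __init__(self, n_observations: int, n_actions: int):
        super().__init__()
        # Fully connected stand-in for the convolutional network used on pixel inputs.
        self.net = nn.Sequential(
            nn.Linear(n_observations, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def dqn_loss(policy_net: DQN, target_net: DQN, batch, gamma: float = 0.99) -> torch.Tensor:
    """Temporal-difference loss on a sampled replay batch.
    `batch` is an assumed tuple of tensors: (states, actions, rewards, next_states, dones),
    where actions is int64 and dones is 0/1 float."""
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken.
    q_values = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target: r + gamma * max_a' Q_target(s', a'), zeroed at episode end.
    with torch.no_grad():
        next_q = target_net(next_states).max(1).values
    targets = rewards + gamma * next_q * (1 - dones)
    return nn.functional.smooth_l1_loss(q_values, targets)
```

A separate, slowly updated target network is a common stabilizing choice in DQN-style training: it keeps the regression targets from chasing an estimate that changes on every gradient step.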

There exist also "model-based" approaches to reinforcement learning, the best-known being DeepMind's AlphaZero (Silver et al. 2017), which convincingly defeated human experts in chess, Go and shogi.  Both model-free and model-based approaches to reinforcement learning have their own benefits and drawbacks (and might even be combined). Model-free approaches are generally easier to implement and can learn from experience more quickly, but may be less efficient and require more data to reach good performance.  Model-based approaches can be more efficient and require less data, but may be more difficult to implement and less robust against environmental changes.

Have you seen those cool videos of Google DeepMind’s soccer-playing robots (Liu et al. 2019, Liu et al. 2021, Haarnoja et al. 2023) or MIT’s soccer ball-kicking robot, DribbleBot (Ji et al. 2023)?  The robot AI in these cases was built using reinforcement learning.

Might Q* have had something to do with Q-learning, and endowing OpenAI’s technology with the RL-driven ability to learn from and make autonomous decisions within dynamic environments?  Possibly?

Wish Upon A*

Speaking of "possibly", is there a possible association of A* with Q*?  In the field of graph theory, A* (Hart, Nilsson & Raphael 1968) is a search algorithm that, when presented with a weighted graph, optimally computes the shortest path between a specified source node and a specified destination node.  Combining Q-learning with A* might therefore yield an optimal set of steps to plan and complete an autonomous action.  That would be practical; this is the sort of efficient, complex, goal-oriented thinking at which natural systems excel.  But might OpenAI's Q* notation indicate an AI synthesis of Q with A*?  Who knows.
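
For reference, here is a compact A* sketch in Python over an explicit weighted graph.  The dictionary-based graph format and the toy example at the end are assumptions for illustration; with an admissible heuristic the returned path is cost-optimal, and a zero heuristic reduces the search to Dijkstra's algorithm.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Minimal A*. `graph` is an assumed dict: node -> list of (neighbor, edge_cost);
    `heuristic(n)` is an admissible estimate of the remaining cost from n to `goal`.
    Returns the lowest-cost path from start to goal, or None if goal is unreachable."""
    # Each frontier entry is (f = g + h, g, node, path so far).
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + heuristic(neighbor), new_g, neighbor, path + [neighbor]),
                )
    return None

# Toy usage with an assumed graph and a zero heuristic:
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
print(a_star(graph, "A", "D", heuristic=lambda n: 0))  # -> ['A', 'B', 'C', 'D']
```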

AI’s Cambrian Explosion

Whether Q* exists or not, and whether it’s some early form of super-intelligence, is in the end irrelevant.  The mythic Q* is important principally because it’s symbolic of the state of current AI technology, which is undergoing a Cambrian Explosion of evolution along every possible axis.  Complementing ongoing advances in hardware technology (Cai et al. 2023, Shainline 2021, John et al. 2020, Christensen et al. 2022) is a dizzying array of advances in AI software (Henaff et al. 2017, Bi & D’Andrea 2023, Hafner et al. 2018, Reed et al. 2022, Kwiatkowski & Lipson 2019), all of which are combining to intrude ever more upon what previously seemed the exclusive realm of natural intelligence.  AI’s Cambrian Explosion is allowing it to now definitively escape the data center, and increasingly make its appearance in our physical spaces.

When it comes to big, missing pieces of artificial intelligence – planning and reasoning in complex environments – the Cambrian Explosion is in full spate, with a virtually endless stream of breakthroughs leveraging underlying LLMs (Dagan et al. 2023, Dagan et al. 2023, Imani et al. 2023, Liu et al. 2023, Silver et al. 2023, Lewkowycz et al. 2022, Zhao et al. 2023, Murthy et al. 2023, Romera-Paredes 2023, Trinh et al. 2024).  For complex reasoning, there have also been Chain-of-Thought prompting of LLMs (Wei et al. 2022), its derivative Tree-of-Thoughts (Yao et al. 2023), and now even Graph-of-Thoughts (Besta et al. 2023).
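
As a purely illustrative aside, Chain-of-Thought prompting amounts to showing the model worked examples whose answers spell out intermediate steps, so that it imitates the same step-by-step style on a new question.  The exact wording below is an assumption (Wei et al. 2022 use few-shot exemplars of this general shape), and the model call itself is omitted since APIs vary.

```python
# Illustrative Chain-of-Thought style prompt: the exemplar's answer shows its arithmetic
# steps, nudging the model to reason step by step on the final, unanswered question.
cot_prompt = """Q: A cafeteria had 23 apples. It used 20 for lunch and bought 6 more.
How many apples does it have now?
A: It started with 23 apples and used 20, leaving 23 - 20 = 3. It bought 6 more,
so 3 + 6 = 9. The answer is 9.

Q: Roger has 5 tennis balls. He buys 2 cans of tennis balls with 3 balls each.
How many tennis balls does he have now?
A:"""

# Feeding `cot_prompt` to an LLM encourages it to produce its own intermediate steps
# (5 + 2 * 3 = 11) before stating the final answer, rather than guessing a number directly.
```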

Where else is the Cambrian Explosion in AI currently manifest?  How about multi-player gaming against human competition, long a benchmark for AI, with noteworthy results achieved in the game of Diplomacy (Bakhtin et al. 2022) and elsewhere (Schmid et al. 2023, Hafner et al. 2023, Brown & Sandholm 2019).  In the former work, an artificial agent named Cicero was applied to a game "involving both cooperation and competition" with "negotiation and coordination between seven players".  "Cicero [integrated] a language model with planning and reinforcement learning algorithms by inferring players' beliefs and intentions from its conversations and generating dialogue in pursuit of its plans.  Across 40 games of an anonymous online Diplomacy league, Cicero achieved more than double the average score of the human players and ranked in the top 10% of participants".

Any Q* super-intelligence evidence within this Cambrian Explosion?  OpenAI has published work both in multi-step mathematical reasoning (Cobbe et al. 2021) and process supervision (Lightman et al. 2023).  And as Anton Chekhov almost said, "One must never place super-intelligence on the stage if it isn't going to go off".  For what it's worth, Ilya Sutskever, OpenAI's co-founder and chief scientist, has now put his focus on solving the problem of superalignment – that is, aligning AI to stay within humans' intended goals even when that AI is deemed super-intelligent.

Brave New World

(With an intentional Aldous Huxley reference.)  So what does it all mean?  Are there important societal implications contained within AI's Cambrian Explosion?  Given all of the breakneck advances in machine intelligence described above, one of the biggest societal questions we face will be what happens when the disembodied voice of AI, now simply instructing you on how to reach your driving destination, becomes the embodied voice, passing you in the hallway in robotic form, engaged in some RL-driven autonomous task.

We humans have hierarchies for everything.  For example, when we assign rights, humans stand at the apex of the rights hierarchy, with other mammals below us, and fish, insects and plants arrayed beneath.  These rights are roughly apportioned via classification of intelligence.  While much discussion today has focused on whether or when AI achieves sentience and AGI, the bigger, more immediate question is already here: where should human society insert artificially intelligent "life" that is "human enough" into our moral hierarchy?  Full AGI is many years away, but AI's Cambrian Explosion brings us systems that we find ever more engaging in ever more places with every passing day.  "Intelligent (human) enough" AI robots, enabled by reinforcement learning, are near at hand, and they raise a key issue of anticipatory technology governance.  What rights should robots have?

We’re already very familiar with purpose-built robots such as the iRobot Roomba.  We can expect similar task-specific autonomous robots to continue their encroachment (Shafiullah et al. 2023, Fu et al. 2024, Chi et al. 2023) into applications in home automation, agriculture, manufacturing, warehousing, retail and logistics.  Given that our built environment has evolved to serve humans – bipedal, bimanual, upright-standing creatures of a certain height – we can anticipate that further advancements in autonomous robots will increasingly resemble Star Wars’ C-3PO and not just the saga’s R2-D2 or BB-8.

Generative AI chatbot systems have shown a remarkable ability to connect with their human users (Skjuve 2021, Tidy 2024).  (Dismayingly, chatbots can be so humanized that they can even be taught to lie: Hubinger et al. 2024.)  We will see a day when these human-chatbot connections are embodied within quite intelligent, autonomous robots.  As these robots become increasingly humanoid (see the DARPA Robotics Challenge entrants, or Enchanted Tools' Mirokai or Engineered Arts' Ameca; Merel et al. 2018) and increasingly ubiquitous (Tesla Bot is slated to cost less than a car), how should we place moral value on an autonomous robot?

Though a theoretical question at present, the moral rights of robots will soon need to be part of society's discussion.  We've recently begun to consider the implications of virtual crimes committed against physical persons.  We will also have to grapple with the societally corrosive effects of physical crimes committed against "virtual persons", a.k.a. robots.

Put more bluntly, will taking a sledgehammer to your coworker's PC be an offense equal to doing the same to a humanoid robot that is able to remonstrate with its attacker using spoken natural language pleas?  Are autonomous robots – possessing symbolic qualities "just human enough", with no sentience or AGI required – to be treated merely as capital-asset machines, or do they possess rights beyond that, meriting a higher place in human society's hierarchy of rights?  AI's Cambrian Explosion is accelerating our need to confront this very question.

Where Is Everybody?

Beyond autonomous robots, where else might AI's Cambrian Explosion lead?  We can absolutely anticipate the integration of human brains with artificial ones.  This prospect is no longer mere science fiction.  Advances in neuroprosthetics (Metzger et al. 2023, Moses et al. 2021) and brain-machine interface (BMI) technologies have demonstrated the ability to perform just these sorts of integrations.  Companies such as MindPortal promise to deliver "seamless telepathic communication between humans and AI", while Neuralink has won FDA approval for a human study of brain implants.  How might this ramify into our next societal question?  Well, if it's illegal for chemically enhanced athletes to compete with the unenhanced, should electronically enhanced humans be allowed to do the same?  Utilize a BMI and sit for the paramedic's exam?  Defend a Ph.D. thesis?  Argue a case in court?  This question too will need to be confronted one day.

Where might the AI Cambrian Explosion eventually culminate?  Consider that we carbon-based, biomass-consuming beings have now invented silicon-based, electricity-consuming beings, beings whose intelligence will one day surpass our own.  Is this perhaps the fate of every advanced civilization in the Universe?  The triumph of a few score years of technology evolution over a few billion years of natural evolution?  Physicist Enrico Fermi famously coined the Fermi Paradox, asking “Where is everybody?” when referring to the lack of direct evidence of alien life.  Maybe advanced alien civilizations everywhere are fated to suffer the same outcome, with machine-based intelligence supplanting natural intelligence.  Unlike natural beings, artificial ones have no biological need to conquer, exploit, and spread their genes.  Alien AI may just cast a curious electronic eye at our small planet, with its primitive technologies, but have no need to traverse deep space and greet us with ray guns.  AI’s Cambrian Explosion answer to the Fermi Paradox?  “Everybody” is a computer.

The post OpenAI, Reinforcement Learning, the Rights of Robots, and… Aliens? AI’s Cambrian Explosion appeared first on UC Berkeley Sutardja Center.

]]>