
What Experts Said

Dhananjai Senthil Kumar

TECHNICAL EXPERT

has a background in robotics starting in high school (FIRST Robotics Challenge) and continuing into college research and projects, accumulating roughly five years of intermittent experience.
 

Jai shared his opinions on robotics development and security considerations. He is currently developing a “Locobot”: a modified Roomba base equipped with an arm and sensors (camera, lidar), designed to function as a robotic server in restaurants.

The project, still in the R&D phase, focuses on environmental mapping, path planning, human interaction, and object manipulation (e.g., picking up dishes). While functionality is the primary focus, Dhananjai has implemented basic security measures: data is processed locally (onboard or on-premise), serialized rather than stored as raw images, and accessed via secured local networks with SSH and potential two-factor authentication (2FA). However, advanced cybersecurity hardening, such as encryption or automated updates, remains a future goal.  
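The local-processing approach described, storing serialized, derived data rather than raw images, can be sketched as follows. This is a minimal illustration only; the detection fields and store are hypothetical, not Jai's actual pipeline.

```python
import json
import time

def serialize_detections(detections, store):
    """Persist only derived detection metadata, never the raw camera frame."""
    record = {
        "timestamp": time.time(),
        # Each detection keeps a label and bounding box; pixel data is dropped.
        "objects": [
            {"label": d["label"], "bbox": d["bbox"]} for d in detections
        ],
    }
    store.append(json.dumps(record))
    return record

# Hypothetical output from an onboard object detector: labels, boxes,
# and the raw pixel buffer that should never leave the robot.
frame_detections = [
    {"label": "plate", "bbox": [120, 80, 200, 160], "pixels": b"\x00" * 1024},
    {"label": "cup", "bbox": [300, 90, 340, 150], "pixels": b"\x00" * 1024},
]

log = []
serialize_detections(frame_detections, log)
```

The design choice mirrors the privacy rationale in the interview: if the on-premise store is ever compromised, an attacker obtains object labels and coordinates, not recoverable images of patrons.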
 

Security challenges are acknowledged but not yet prioritized. The system relies on Linux user permissions and physical oversight by restaurant owners to mitigate tampering risks. A kill switch and fail-safe protocols ensure the robot stops safely during failures or network drops. Dhananjai admits vulnerabilities exist, particularly in physical tampering or sensor manipulation, but believes current measures suffice for the R&D stage.
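A fail-safe of the kind described, where the robot stops on a network drop, is commonly implemented as a heartbeat watchdog. The sketch below is a hypothetical illustration, not Jai's implementation; the class name, timeout, and stop callback are all assumptions.

```python
import time

class HeartbeatWatchdog:
    """Halt the robot if no heartbeat arrives within a timeout window."""

    def __init__(self, timeout_s=2.0, stop_fn=None):
        self.timeout_s = timeout_s
        self.stop_fn = stop_fn or (lambda: None)
        self.last_beat = time.monotonic()
        self.stopped = False

    def beat(self):
        # Called whenever a heartbeat packet arrives from the controller.
        self.last_beat = time.monotonic()

    def check(self):
        # Called periodically from the robot's main control loop.
        if not self.stopped and time.monotonic() - self.last_beat > self.timeout_s:
            self.stopped = True
            self.stop_fn()  # e.g., engage brakes / cut motor power
        return self.stopped

halted = []
wd = HeartbeatWatchdog(timeout_s=0.05, stop_fn=lambda: halted.append("stop"))
wd.beat()
wd.check()        # heartbeat is fresh: keep running
time.sleep(0.1)   # simulate a dropped network connection
wd.check()        # timeout exceeded: robot halts
```

The key property is that the safe state (stopped) is the default outcome of silence, so a severed connection cannot leave the robot driving blind.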

Future plans include a mobile app for monitoring and updates, alongside potential AI-driven adaptations inspired by autonomous vehicles and GDPR principles (e.g., data minimization).  

Liability for security breaches is seen as shared between developers (for software flaws) and users (for network/physical security). While terms and conditions are planned, specific legal protections for consumers remain undefined. Dhananjai advocates for government regulations prioritizing physical safety standards (e.g., reliability metrics like "five nines") and data privacy, though he notes cybersecurity is secondary in his current phase.  
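For context, "five nines" means 99.999% availability, which is a strict budget. A quick worked calculation shows how little downtime it permits per year:

```python
minutes_per_year = 365 * 24 * 60           # 525,600 minutes in a (non-leap) year
availability = 0.99999                     # "five nines"
allowed_downtime = minutes_per_year * (1 - availability)
print(round(allowed_downtime, 2))          # roughly 5.26 minutes of downtime per year
```

That is, a robot held to five nines may be unavailable for only about five minutes annually, which illustrates why Jai treats such reliability metrics as a regulatory-grade standard rather than an R&D-stage goal.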
 

The interview highlights a pragmatic, iterative approach to robotics development, where functionality precedes advanced security. Dhananjai’s project reflects broader trends in emergent robotics: balancing innovation with nascent safeguards, while looking to industries like IoT and autonomous vehicles for future best practices. His focus on local data processing and modular upgrades underscores a cautious but forward-looking stance, aiming to eventually align with regulatory and consumer expectations as the technology matures.  

Brendan Bernicker

PRIVATE SECTOR LEGAL + CONSUMER PROTECTION EXPERT

now a Visiting Assistant Professor at Penn State Law, holds a J.D. from Yale. One strand of his research focuses on AI law. Before joining the law faculty, Professor Bernicker worked as a legal practitioner in the private sector. With a background that includes programming skills, he has extensive experience advising startups and small businesses on a range of matters, including data privacy compliance, technology-law litigation, and even AI system development.
 

With the rapid advancement of security technologies and the maturation of the cybersecurity industry, the growing availability of skilled professionals has gradually reduced corporate compliance costs for security measures. As compliance costs fall and litigation risk rises, companies tend to comply proactively to avoid exposure. Beyond safeguarding their commercial interests, enterprises strategically adopt cybersecurity protocols to mitigate data-leakage risks and sustain a competitive advantage in data protection.
 

If a robot causes a data leak due to a hacker attack, responsibility is generally allocated to the manufacturer and operator rather than the consumer. Under tort law's joint liability regime, the precise allocation depends on the facts of each case and may involve the manufacturer, operator, or user. Cross-border data flows face legal conflicts (such as China's data localisation rules versus EU cross-border compliance requirements), and companies usually resolve such supply chain issues through contract negotiations.
 

If a robot causes physical harm due to a product defect, strict product liability applies, but psychological harm and emotional distress are generally not compensable. Data loss may be claimable, but businesses often use disclaimers to avoid liability. Consumers often overlook cybersecurity risks, so the law must ensure contract terms are fair (e.g., via standard contract templates). A business's bankruptcy may leave its robots without maintenance, harming consumer rights, but legal intervention in such market risks is limited.
 

Companies need to balance cybersecurity measures with user experience, as excessive protection may reduce product usability. Disclaimers and data ownership clauses tend to favour commercial interests, but must avoid harming consumer rights. The EU's GDPR and AI Act have been criticised for being enacted too early and imposing high compliance costs. In contrast, the US's ‘wait-and-see’ regulatory approach is considered more flexible and pragmatic.

Consumers generally underestimate cybersecurity risks (e.g., dark web data breaches), while governments and businesses may exploit vulnerabilities for profit.

Technical solutions must adapt to multi-jurisdictional legal requirements (e.g., data encryption, localization). It is recommended that regulations be dynamically adjusted to align with technological advancements, avoiding premature codification, and promoting collaboration between technology and law to lower compliance barriers for businesses.

Desirae Satterlee

CONSUMER

is a JD student at Penn State. She owns an Ionvac robot vacuum and other smart home devices. While comfortable with basic data collection for product functionality, she has concerns about Wi-Fi-related data privacy and potential misuse. 
 

She chose the product primarily based on price and availability at a trusted retailer (Walmart). While comfortable with basic data collection (e.g., apartment layout and cleaning habits) for product improvement, she expresses concern about potential Wi-Fi data misuse, especially after reflecting on security vulnerabilities in other robotic devices.


Desirae admits she didn’t read the terms and conditions during setup and was unaware of the specifics of data collection until prompted. She believes manufacturers and developers, not consumers, should bear responsibility for security flaws, as these devices are marketed as user-friendly. Though her current vacuum has no cameras and poses minimal perceived risk, she acknowledges that transparency, regular security updates, and third-party verification would be more critical for advanced robots (e.g., those with cameras or butler functions).  


The conversation heightened her awareness of cybersecurity, prompting her to consider researching the security reputations of brands before future purchases. While price and convenience remain her top priorities, she now sees value in independent certifications to ensure safer consumer choices. Her stance reflects a common consumer trend: balancing practicality with growing, albeit reactive, concerns about data privacy and device security.

Mingrui Fu

CONSUMER

currently resides in Beijing, China, and holds a Ph.D. from Beijing University of Technology. Drawing on extensive experience with smart home devices (smart surveillance cameras, robotic vacuum cleaners, automatic pet feeders, smart speakers, smart hubs, smart locks, and thermostats), he was interviewed as a consumer with deep personal insight into the topic.
 
Dr. Fu, a smart home enthusiast, primarily considers how devices can enhance convenience when making purchases. Through actual usage he has gradually become aware of potential cybersecurity risks; after installing a smart camera, for instance, he pays close attention to reports of privacy breaches involving such devices. While using smart control panels and door locks, he also harbors concerns about network security, though he has never suffered any tangible loss. He believes these risks remain manageable and do not warrant excessive anxiety.
Regarding China's rapidly advancing humanoid robot industry, Dr. Fu maintains cautious optimism. He recognizes the potential of such products in eldercare, medical assistance, and household chores, but is particularly concerned about cybersecurity. He argues that manufacturers should implement physical defense mechanisms alongside software protections to fully eliminate security vulnerabilities. As a consumer, he is willing to pay a premium for more secure models but prefers to wait until the technology matures before purchasing. Meanwhile, he also advocates for better consumer education, raising awareness about cybersecurity risks while providing practical guidance on risk mitigation. 
Dr. Fu emphasizes that, given the technical complexity of humanoid robots, regulatory bodies should enforce stricter oversight. However, he also expressed concerns about policymakers. He worries that due to insufficient understanding of the technology, current laws and regulations may fail to adequately address practical needs, resulting in regulatory loopholes or inadequate oversight. This "technology-policy" knowledge gap could undermine the effectiveness of regulatory systems, exposing emerging technologies to risks of delayed or misaligned regulation.
 

Rudan Cheng

ACADEMIC EXPERT

holds a Doctor of Law from Peking University (jointly trained at the Max Planck Institute for Comparative Public Law and International Law in Germany) and is an Associate Professor and Master's Supervisor at the School of International Law, China University of Political Science and Law. Her academic focus has been primarily on international trade and law & economics; in recent years she has turned her attention to law & technology and AI regulation.
 

Professor Cheng draws critical parallels with existing IoT and smart home vulnerabilities while examining technical challenges, regulatory shortcomings, and future mitigation strategies. Humanoid robots face three primary security threats: data security risks, where sensors collecting visual, tactile, and voice data are vulnerable to interception or tampering during transmission and storage; control system vulnerabilities, where AI decision-making algorithms could be maliciously manipulated; and privacy breaches in human-robot interaction, through voice command hijacking or biometric data theft.
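The in-transit tampering and command-hijacking threats above are commonly mitigated with message authentication. The sketch below uses HMAC, a standard, widely deployed technique (simpler than the blockchain-secured protocols the interview also mentions); the shared key and command fields are hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"shared-robot-controller-key"  # hypothetical pre-shared key

def sign_command(command: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the robot can detect tampered commands."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": command, "tag": tag}

def verify_command(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_command({"action": "move", "target": "table_4"})
verify_command(msg)                    # authentic command verifies

msg["payload"]["target"] = "kitchen"   # simulated in-transit tampering
verify_command(msg)                    # tag no longer matches: rejected
```

Authentication alone does not provide confidentiality; in practice it would be layered with transport encryption, but it directly addresses the manipulated-command scenario Professor Cheng describes.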

While current technical solutions include real-time monitoring systems and blockchain-secured communication protocols, significant gaps remain in penetration testing and long-term reliability assessments, with core technologies like high-precision positioning still under development.

Professor Cheng notes a critical lack of unified security standards across the industry: varying hardware interfaces and communication protocols among manufacturers complicate coordinated security efforts. China's MIIT introduced guidelines in 2023 to establish a safer industrial chain, but specific technical standards remain incomplete. Globally, regulatory frameworks lag behind technological advancement, with no dedicated robot safety laws currently in force; oversight relies instead on patchwork applications of IoT and AI regulations such as the EU AI Act and the GDPR. Additional complexities arise from unresolved ethical and liability questions regarding accountability for autonomous robot decisions. Key systemic barriers include cost-safety tradeoffs that incentivize security compromises, insufficient interdisciplinary collaboration between relevant sectors, and a shortage of specialized professionals such as “robot security engineers.”
 

To address these challenges, Professor Cheng proposes a multi-pronged approach spanning technological, standardization, regulatory, and ecosystem improvements. This includes developing hardware-level security solutions like trusted execution environments, establishing industry-wide security certifications through organizations like the China Academy of Information and Communications Technology, enacting clear liability laws distinguishing manufacturer and user responsibilities, and fostering cross-industry collaboration through training programs. She concludes that achieving secure human-robot coexistence demands synchronized progress across technology innovation, standardized governance, and international regulatory cooperation, a challenge as complex as the robots themselves, requiring coordinated efforts from manufacturers, policymakers, and security experts to build a trustworthy human-robot ecosystem.
