Microsoft Teams will soon tell your boss where you are. Starting December 2025, Teams can automatically detect when you connect to your company's Wi-Fi and update your location to "in the office."

It sounds like a small feature. It isn't. Location tracking through workplace networks is the newest frontier in digital surveillance, and it's coming through your collaboration software.

Microsoft says the feature is opt-in, which is welcome. But that decision will rest largely with employers and admins, not the average employee trying to meet deadlines.

If you work for a Microsoft-using organization, now is the time to ask: Is our company planning to activate this feature? Has consent been properly documented? If you represent a union, this deserves to be on your next agenda.

The GDPR and UK Data Protection Act require transparency, necessity, and proportionality for any location tracking. Under the EU AI Act, AI systems used for worker monitoring and management may also qualify as high-risk, and employers must conduct a fundamental rights impact assessment before rolling such systems out.

This isn't paranoia. It is risk management, employee rights, and compliance. Workplace tracking without explicit, informed consent can violate privacy law in multiple jurisdictions, and it may expose employers to liability under both the GDPR and the EU AI Act's risk provisions. If your organization uses Microsoft Teams with minors, such as schools or training programs, the stakes are even higher.

Here's what to do as an employee, parent, or guardian:
🔹 Ask your IT administrator if "location autodetection" is enabled.
🔹 Request a copy of the company's Data Protection Impact Assessment (DPIA).
🔹 Ensure opt-in consent is voluntary and revocable.
🔹 Check that logs are deleted regularly and not used for performance evaluation.

Transparency is not optional.

#DigitalSovereignty #WorkplacePrivacy #AICompliance #GDPR #MicrosoftTeams

Image source: SlashGear, https://lnkd.in/di5WvY2e
From Microsoft:
Microsoft 365 Roadmap: https://lnkd.in/dYc3N9TX
Microsoft Learn (Configure auto-detect of work location): https://lnkd.in/dtEkYNqB
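For intuition about what "auto-detect" means mechanically, below is a minimal conceptual sketch of Wi-Fi-based work-location detection, assuming a hypothetical registry of employer access points. It is not Microsoft's implementation; every name in it is an illustrative assumption, and the real feature's behavior is documented at the Microsoft Learn link above.

```python
# Conceptual sketch only, NOT Microsoft's code: compare the network a device
# is connected to against an employer-registered list, gated on opt-in.

CORPORATE_BSSIDS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}  # hypothetical registry

def detect_work_location(connected_bssid: str, user_opted_in: bool) -> str | None:
    """Return a work-location label, or None if nothing should be updated."""
    if not user_opted_in:
        return None  # consent gate: no opt-in, no location update
    if connected_bssid.lower() in CORPORATE_BSSIDS:
        return "In the office"
    return None  # unrecognized networks reveal nothing

# An opted-in user on a registered corporate access point:
print(detect_work_location("AA:BB:CC:DD:EE:01", user_opted_in=True))  # In the office
```

Even in this toy version, notice where the policy questions live: who maintains the registry, where the opt-in flag is set and by whom, and whether the resulting labels are logged or retained.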
Workplace Surveillance Ethics
-
The Surveillance Trap: Monitoring Boosts Visibility, Erodes Trust.

Over the past few months, more companies have quietly rolled out new monitoring systems: tracking mouse movements, keystrokes, websites, "idle time," and even screenshots.

The intent? Improve productivity, tighten accountability, optimise workflows.
The outcome? A workplace culture that feels more watched than supported.

Here's the paradox leaders are missing:

Monitoring boosts visibility, not trust.
Employees may be online longer, but they're not necessarily more engaged. Surveillance signals a lack of confidence, and people respond by doing only what gets measured.

Tracking activity does not necessarily mean tracking impact.
A green dot on Teams does not equal performance. When companies measure time-at-keyboard more than outcomes, employees shift from value-creation to "visibility theatre."

The emotional cost is real.
Workers report:
• feeling micromanaged
• reduced autonomy
• lower morale
• rising anxiety and distrust

Ironically, the very tools meant to improve productivity may be undermining it.

Modern work isn't defined by minutes of activity. It's defined by:
• problem-solving
• creativity
• judgment
• ownership
• outcomes

These can't be captured by keystroke logs.

The companies that will win aren't the ones tracking employees... They're the ones empowering them.
-
The dark side of employee monitoring: trust, value, and agency 🕵🏻‍♀️🤔

A study found that 80 percent of top US employers use tech to track workers' productivity, often in real time. Does our company monitor us and our fellow workers with high-tech software? Do we even know?

➡️ The missed side of value
Employee monitoring encourages the mentality that the only valuable hours are those we spend in front of our computers; instead, we need to reframe what productivity is.

➡️ A trust issue
"If we can't see our people, how do we know what they're doing?" Digital monitoring is an extreme form of micromanagement, a need for control resulting from a lack of trust: the assumption that when people are not in the office, they are not "productive."

➡️ Monitoring can backfire
Research suggests that employee monitoring can backfire, making people feel they have no agency and increasing the prevalence of the very behaviors these systems aim to deter.

➡️ Rethinking knowledge work and value
People may work hard to prove they are working instead of doing valuable work, constantly demonstrating effort rather than creating value.

🌱 So, how can we create cultures where people are trusted to manage their time and produce quality work?

➡️ The potential of people analytics
If we can solve the trust and transparency issues, people analytics could help employees use their data to better understand and improve their work patterns.

Illustration by me.

Extract from an article by Rachel Botsman. Link to the complete source in the first comment 👇

#productivity #trust #management
-
Can law help build ethical AI systems by design, or does ethics resist formalization?

In earlier posts, I argued that ethics is about reasoned judgement under uncertainty, and that regulation can create clarity where organizations otherwise struggle. With today's post I want to connect law and ethics to technical implementation; specifically, the role that law can play in facilitating ethical data practices by design.

Privacy professionals are familiar with this concept, as epitomized by Art. 25 GDPR, which requires organizations to implement data protection by design and by default. But as Prof. Christian Djeffal outlines in a recent article, law by design has since become a fixture of EU law: law by design translates legal and ethical goals into technical and organizational obligations. At the same time, it deliberately leaves discretion as to implementation.

➡️ What law can do well
Frameworks like the GDPR and the AI Act show how law can meaningfully support ethical data practices by design:
✅ They shape how organizations structure the lifecycle of data processing, starting with an initial assessment of the necessity and proportionality of processing.
✅ They require organizations to clearly define roles and responsibilities from the beginning, and to document any relevant risks.
✅ They encourage organizations to seek diverse perspectives when developing and deploying new technologies, reflecting the inherently interdisciplinary nature of sociotechnical design.

➡️ What this means for ethical AI
Ethics is no longer a nice-to-have when it is hardcoded into legal requirements. As I argued in my master's thesis, the AI Act, for instance, translates ethical obligations into technical requirements, specifically mandating:
✅ Respect for human autonomy, by requiring human oversight of the development and deployment of AI systems.
✅ The prevention of harm, through accuracy, robustness, and security.
✅ Fairness and explainability, through robust data governance and record-keeping.

➡️ Where law reaches its limits
At the same time, law by design does not resolve dilemmas or trade-offs. Ethical behavior is not a technological fact but the result of human deliberation. Procedure matters just as much as outcome, and legal requirements alone do not tell organizations how to weigh competing priorities in practice.

➡️ What this means for leaders on ethical AI
Law by design is not a shortcut to ethical AI. But it can create the right incentives. Leaders should:
✅ Leverage law-by-design requirements as a foundation for responsible data processing.
✅ Facilitate ethical deliberation to translate law-by-design requirements into concrete deliverables.
✅ Open up room for innovation by, in Djeffal's words, "prompting the development of solutions where none yet exist."

Link to Djeffal's article: https://bit.ly/45Sj76P.

#ResponsibleAI #AIGovernance #DataEthics #Leadership
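To make "law by design" concrete, here is a minimal, hypothetical sketch of how one of the obligations above, record-keeping, might be translated into code. The function names, log fields, and the predict() stub are illustrative assumptions, not a compliance recipe or any specific library's API.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: every automated decision is logged with enough
# context for later human review (traceability, accountability).
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def predict(features: dict) -> float:
    """Stand-in for a real model; returns a dummy score."""
    return 0.5

def predict_with_audit_trail(features: dict, model_version: str, reviewer_id: str) -> float:
    """Run a prediction and write an audit record for human oversight."""
    score = predict(features)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which system produced the output
        "reviewer_id": reviewer_id,      # who is accountable for oversight
        "inputs": features,              # what the decision was based on
        "output": score,
    }))
    return score

predict_with_audit_trail({"tenure_years": 3}, model_version="v1.2", reviewer_id="hr-reviewer-7")
```

The point is not the code but the translation step: a legal requirement ("keep records adequate for oversight") becomes a concrete engineering obligation with named owners and reviewable artifacts.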
-
🎙️💡 One of the country's top compliance training companies recorded the conversations of its employees by turning their laptops into covert listening devices while they were at home, in a case that tests the boundaries of workers' privacy.

Victorian police are investigating claims that Safetrac breached the state's surveillance laws after chief executive Deborah Coram admitted in legal documents that her company recorded the audio and screens of select staff members who work from home.

The idea of recording workers' conversations, let alone their conversations at home, is unusual. Given how readily employees' home lives leak into their work lives during remote working, the risks are extraordinary.

Unions are pushing for new laws to guard against unreasonable or excessive monitoring in the workplace, and state Labor governments are considering urgent reforms to update outdated surveillance laws for the WFH era. State work health and safety laws are also starting to recognise that surveillance is a potential psychosocial hazard.

For the layman, privacy has long been considered an individual right: you waive the rights to your data, or you consent to workplace monitoring. But cases such as Safetrac's show that, much like work health and safety laws, privacy can also be understood as a collective right. Privacy is relational. Surveillance can affect not only you but those around you, including family members, friends and other third parties.

❓ Is it ever okay to record the audio and screens of employees when they are working from home, or in other locations outside the traditional workplace?

As always, keen to hear your thoughts, opinions and experiences.

👇 Link to the full AFR article is available in the comments section below 👇

WorkFLEX-Australia
Author: David Marin-Guzman
The Australian Financial Review

#wfh #employeesurveillance #futureofwork
-
The most dangerous lie in business today:
'We need to monitor our people to ensure productivity.'

A CEO friend shared his 'productivity tracking' results with me. The data was shocking.

Their most monitored team?
❌ Highest turnover rate.
❌ Zero innovation.
❌ Lowest output.

And here's a controversial take (which I shared with him) 🔥

Every keystroke you track,
every minute you monitor,
every bathroom break you log...

You're not measuring productivity. You're documenting distrust. (A bit black-or-white, but still...)

So, what actually drives performance?

1/ Crystal Clear Expectations
✅ Set measurable outcomes
✅ No gray zones on deadlines
✅ Define what winning looks like

2/ Trust by Default
✅ Zero surveillance
✅ Focus on deliverables
✅ Celebrate achievements, not hours

3/ Adult Conversations
✅ Quality check-ins
✅ Address issues head-on
✅ Solutions over surveillance

Companies still playing digital babysitter? They're losing the war for talent. (And their best people are already interviewing elsewhere.)

The future belongs to companies that:
✅ Trust first
✅ Measure impact
✅ Enable autonomy

The harsh reality? Your turnover rate tells the real story.

P.S. Later, from that same CEO: "Deleted a lot of that monitoring. Our new productivity metric? Trust."

👉 Are you brave enough to lead with trust?

🔁 Share if you're committed to building better workplaces
🎯 Follow for more unfiltered leadership insights
-
🚨 [AI POLICY] Big! The U.S. Department of Labor published "AI and Worker Well-being: Principles and Best Practices for Developers and Employers," and it's a MUST-READ for everyone, especially ➡️ employers ⬅️.

8 key principles:

1️⃣ Centering Worker Empowerment
"Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace."

2️⃣ Ethically Developing AI
"AI systems should be designed, developed, and trained in a way that protects workers."

3️⃣ Establishing AI Governance and Human Oversight
"Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace."

4️⃣ Ensuring Transparency in AI Use
"Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace."

5️⃣ Protecting Labor and Employment Rights
"AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, and anti-discrimination and antiretaliation protections."

6️⃣ Using AI to Enable Workers
"AI systems should assist, complement, and enable workers, and improve job quality."

7️⃣ Supporting Workers Impacted by AI
"Employers should support or upskill workers during job transitions related to AI."

8️⃣ Ensuring Responsible Use of Worker Data
"Workers' data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly."

➤ This is an essential document, especially when AI development and deployment occur at an accelerated pace, including at the workplace, and not much is said regarding workers' rights and labor law.

➤ AI developers should have labor law and workers' rights in mind when building AI systems that will be used in the workplace. Additional guardrails might be required.

➤ Employers should be aware of their ethical and legal duties if they decide to use AI in the workplace. AI-powered systems are not "just another technology" and present specific risks that should be tackled before deployment, especially in the workplace.

➡️ Download the document below.

🗞️ STAY UP TO DATE. AI governance is moving fast: join 36,900+ people in 150+ countries who subscribe to my newsletter on AI policy, compliance & regulation (link below).

#AI #AIGovernance #AIRegulation #AIPolicy #WorkersRights #LaborLaw
-
Remote work has created a new obsession: productivity tracking software that monitors keystrokes, tracks mouse movements, and measures "active time."

But most companies are measuring the wrong things.

Someone just solved their company's biggest client problem in 20 minutes of thinking. Then they went for a walk to clear their head and plan what comes next. The productivity software flagged them as "unproductive."

Meanwhile, a colleague spent eight hours clicking through spreadsheets, moving their mouse, and looking busy. The software thinks they're amazing.

Companies are measuring activity, not results. Motion, not progress. Hours logged, not problems solved.

Productivity isn't about being busy. It's about moving things forward.

The best remote workers know when to step away from the screen to think clearly. Their best ideas come during walks, conversations, or while doing something completely different. But productivity software sees this as "inactive time."

If a company needs to track every keystroke to know whether someone is working, it has either hired the wrong people or created the wrong culture.

Trust and results beat surveillance every time.

What's your experience with remote work? Do these tracking tools actually help?
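To see concretely why such tools misread deep work, here is a minimal sketch of the kind of "active time" metric activity trackers compute; the idle threshold and event format are assumptions, not any specific vendor's algorithm.

```python
# Naive "active time" metric: sum the gaps between input events (keystrokes,
# mouse moves), discarding any gap longer than an idle threshold.
IDLE_THRESHOLD = 300  # assumption: 5 minutes without input counts as "idle"

def active_seconds(event_times: list[float]) -> float:
    """Sum the short gaps between consecutive input events.
    Uninterrupted thinking produces no events, so it scores zero."""
    active = 0.0
    for prev, curr in zip(event_times, event_times[1:]):
        gap = curr - prev
        if gap <= IDLE_THRESHOLD:
            active += gap
    return active

# 20 minutes of hard thinking between two keystrokes: 0 "active" seconds.
print(active_seconds([0, 1200]))
# 20 minutes of mindless clicking every 10 seconds: 1200 "active" seconds.
print(active_seconds(list(range(0, 1201, 10))))
```

The metric rewards exactly the behavior the post describes: constant low-value input scores perfectly, while the 20-minute solution to the client problem scores zero.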
-
You can't monitor your way to a high-performance culture.

If a team only performs when it is being watched, you don't have a culture; you have a surveillance state. And in the modern workplace, surveillance is the fastest way to kill the very innovation you're trying to measure.

Real leadership happens in the "shadows": it's what your team does when the lights are off. It's the difference between a team that ticks boxes because they have to, and a team that creates value because they want to.

Boardroom rhetoric often misses this:

🟢 The Watcher's Paradox: People don't give their best when they're watched; they give their best when they're trusted.

🟢 The Safety Multiplier: When people feel safe, they perform better. It isn't a "soft" sentiment; it's a biological performance requirement.

🟢 The Invisible Engine: Culture isn't found in your workspace or equipment. It's found in the "smell" of your office, the rituals, the unprompted collaboration, and the way decisions are made when you aren't there to mediate.

As leaders, we have to ask ourselves: are our structures and processes designed to catch mistakes, or are they designed to foster autonomy and development?

If you strip away the office décor and the employee handbook, what remains of your culture? If the answer is "silence," then the trust isn't there.

High performance isn't forced through a lens; it's unlocked through a sense of belonging and safety.

Have you noticed a shift in output when you've stepped back and leaned into trust rather than tracking?

Follow Rob Gilder for reflections on leadership, empowerment, and building healthy team cultures.
-
So today we have another example of a business leader saying and doing something so unbelievably stupid in relation to WHS that my desk has an indentation where my head has been hitting it since I read the reporting in the AFR.

A business called Safetrac turned on audio surveillance on the computers of its staff who were working from home... without a clear policy, without telling them, and definitely without anything approaching consultation.

Apparently Safetrac deployed Teramind to monitor "underperformers," enabling laptop microphones from mid-April to early June, and only expanded its four-sentence surveillance policy at the end of June. On 12 August, WorkCover agent Allianz accepted a mental-injury claim from a worker who developed anxiety after discovering the audio surveillance. Victoria Police is reportedly investigating.

This is not a grey area of etiquette. It is a failure of process, consultation and risk management. In Victoria, employers must consult with employees and HSRs when identifying or assessing hazards, when deciding on risk controls, and when monitoring the health of employees and workplace conditions. Rolling out intrusive monitoring, especially audio capture, undoubtedly triggers those duties. Consultation isn't a courtesy; it is a statutory requirement.

But wait, it gets worse...

Safetrac's updated policy reportedly asserts that monitoring "in accordance with employment contracts, company policies, and relevant legislation are not considered psychosocial hazards." I will gladly buy a decent bottle of wine for any of my contacts who can point to the law that allows CEOs to arbitrarily define what is and is not a hazard.

Thankfully, we live in a society where you can't just do stuff to people and arbitrarily decide that what you're doing is not evil, that what you're proposing doesn't have risks, and that in your enlightened and lofty view people should be happy about your decisions. Psychosocial hazards are determined by the nature of work and its impacts, assessed through a risk process with worker consultation, not by policy wording. Attempting to define surveillance out of "hazard" status misses both the law and the science.

If the AFR reporting is accurate, here's what good governance should have required before any deployment:

• A formal psychosocial risk assessment with workers and HSRs, and clear, documented consultation.
• A proportionate purpose test (what problem are we solving?) and strict minimisation (no audio by default).
• Transparent, specific notices and informed consent, not a retrofit policy.

Compliance isn't about how cleverly you can write a policy after the fact. It's about whether your decisions respect the law, your people, and the risks you create. On all three counts, this approach fails the test.