Please answer all four questions in 500 words.
1. Write a comparative analysis of the two articles below (Ethical Issues and Ethical Issues 1), noting their similarities and differences.
2. Compare the information in those articles to the materials in the Ch14 PPT. Does the premise of those articles support the overall theme of the Ch14 PPT materials? Why or why not?
3. Discuss what you learned from those articles. In your discussion, give example(s) of how your organization handles ethical concerns as they relate to information management.
4. Review this video and, within your posting above, give your overall thoughts on employee monitoring.
IT for Management: On-Demand Strategies for Performance, Growth, and Sustainability
Eleventh Edition
Turban, Pollard, Wood
Chapter 14
Ethics, Privacy, and Sustainability
Learning Objectives (1 of 4)
Copyright ©2018 John Wiley & Sons, Inc.
IT Ethics
Predicting People’s Behavior
Predicting people’s behavior is big business, but companies may face backlash from customers or be subject to investigations or fines.
Mobile Apps and Risky Behaviors
93% of the top 200 free iOS and Android apps exhibited at least one risky behavior.
Apple's policy prohibits gathering user information without permission, but countless third-party apps are unregulated.
Mobile Apps and Risky Behavior
Risky Behaviors
Location tracking
Accessing the device’s address book or contact list
Identifying the user or the phone's unique device identifier (UDID)
Recording in-app purchases
Sharing data with ad networks and analytics companies
Twitter, Foursquare, and Instagram routinely gather information from personal address books and other places on your phone.
Google Street View
Risky Behavior
Wardriving
Driving around sniffing out and mapping the physical location of the world’s Wi-Fi routers (see Wi-Spy).
Open Wi-Fi Networks
Non-password protected routers that provide access over wireless networks.
The FCC posted, “…collecting information sent over Wi-Fi networks clearly infringes on consumer privacy.”
Additive Manufacturing Dilemmas
3D Printing
Depositing tiny layers of material to build objects from computer-aided design (CAD) and/or computer-aided manufacturing (CAM) blueprints.
Bioprinting
Using DNA to 3D-print human body parts with bioprinting technology.
IT Ethics
By avoiding illegal conduct, do companies also act responsibly? Explain your answer.
What types of companies can benefit from predicting people’s behavior?
When is predicting people’s behavior a violation of privacy? Give an example.
When is predicting people’s behavior not a violation of privacy? Give an example.
What are the ethical challenges attached to 3D printing and 3D bioprinting?
Research the current debate about 3D printing and bioprinting.
Suggested Answers:
1. No. What is legal is not necessarily ethical or responsible. Laws lag behind what is possible to do because laws change slowly whereas technology changes rapidly.
2. Virtually any type. The most benefit is for those at the end of the supply chain (retailers, etc.)
3. Answers may vary. Certainly when the personal data on which the prediction relies are collected without consent, as in the Target case, especially for those underage.
4. Answers may vary. It depends on the level of intrusiveness, and that can be very subjective. One might argue that Canadian Tire’s credit card business inherently has purchase information and can analyze to determine risk of missed payments.
5. Answers may vary. There are many. They range from legal to illegal activities (e.g., theft of intellectual property). When demand is high, will living and/or nonliving medical organs/devices go to the highest bidder? Who is legally responsible for ensuring the quality of the resulting organs and devices? In some cases, 3D printing may be the only mechanism to produce an item. 3D printing is costly. In cases where non-additive manufacturing can do the same at less cost, which will be used?
6. Answers will vary.
Learning Objectives (2 of 4)
Privacy and Civil Rights
Privacy
The right, or freedom of choice and control, to self-determine what information about you is made accessible, to whom, when, and for what use or purpose.
Breach of Privacy
Unauthorized disclosure of personal information.
Privacy Paradox
Phenomenon where social users are concerned about privacy but their behaviors contradict these concerns to an extreme degree.
Figure 14.2: Major Data Breaches Reported by 1,040 Adult Americans in 2016 Pew Research Privacy and Security Study
Privacy Paradox: Social Recruitment
Social Recruitment
Use of social media to engage, share knowledge among, and recruit and hire employees.
It often involves information the candidate did not want considered, or that is illegal to use, in the hiring process.
Typical social recruitment spans all job levels.
Social Recruitment: Best Practices
Best practice provisions for recruiters:
Have either a third party or a designated person within the company who does not make hiring decisions do the background check.
Use only publicly available information. Do not friend someone to get access to private information.
Do not request username or passwords for social media accounts.
Recruiters are also social stalkers!
Civil Rights: Protected Classes
Civil Rights
Rights protected by federal law, such as freedom of speech, press, and assembly; the right to vote, etc.
EEOC (Equal Employment Opportunity Commission)
Enforces federal laws prohibiting discrimination in employment.
Protected classes
Characteristics identified by law that cannot be used in the hiring process.
Civil Rights: Discrimination
Discrimination
Biased or prejudicial treatment in recruitment, hiring, or employment based on certain characteristics, such as age, gender, or genetic information; it is illegal in the United States.
Corporate Social Media Discrimination
The use of protected class information to weed out candidates.
Social Media Discrimination
Visiting a person's social media sites creates the opportunity to view large amounts of information that nondiscriminatory hiring practices prohibit from consideration.
Civil Rights: Negligent Hiring
Competing Legal Concerns
Two competing legal concerns are discrimination and negligent hiring.
Negligent Hiring
If a workplace violence incident occurred and the attacker’s public social networking profile contained information that could have predicted that behavior, the employer may be held liable for negligence in not using readily available information during the hiring decision.
Reducing Risk of Negligent Hiring
Ask candidates to sign a disclosure statement
Allow self-disclosure
Create a standard process and document it
Consistent well-documented processes
Avoid coercive practices
Eliminate recruiter pressure for applicant disclosure
Training
Emphasize related compliance
Privacy Paradox, Privacy, and Civil Rights
Describe privacy.
What is the phenomenon where social users are concerned about privacy but their behaviors contradict these concerns?
What is the use of social media to find, screen, and select job candidates?
Rejecting a job candidate because of concerns about the person’s health from information on his or her Facebook page is an example of what?
Age, disability, gender, religion, and race are examples of what?
Why are the legal concepts of discrimination and negligent hiring competing demands on a business?
Suggested Answers:
1. Privacy is the right to self-determine what information about you is made accessible, to whom, when, and for what use or purpose. Privacy means we have freedom of choice and control over our personal information, including what we do not want shared with or used by others.
2. The privacy paradox refers to this phenomenon where social users are concerned about privacy but their behaviors contradict these concerns to an extreme degree. Users of social sites often claim that they are concerned about their privacy. At the same time, they disclose their highly personal lives, even content that is incriminating or illegal, in their profiles or posts.
3. Social recruitment refers to use of social media to find, screen, and select job candidates. Often it involves searching information the job candidate did not want considered or that is illegal to use in the hiring process.
4. This is an example of corporate social media discrimination.
5. Protected classes.
6. Two competing legal concerns are discrimination and negligent hiring. These put pressure on prospective employers to find out what they can about a potential employee, to avoid negligence in hiring, yet not cross the line into discrimination.
Discrimination. Most employers have stringent employment policies that prevent their recruiters and hiring managers from learning potentially discriminatory information about candidates. Visiting a person’s social media sites, however, clearly creates the opportunity to view large amounts of information going against these nondiscriminatory practices.
Negligent hiring. Employers must consider the potential risk of a negligent hiring or negligent retention lawsuit related to social networking profile information. It is possible that if a workplace violence incident occurred and the attacker’s public social networking profile contained information that could have predicted that behavior, the employer may be held liable for negligence in not using readily available information during the hiring decision.
Learning Objectives (3 of 4)
Technology Addictions: Cognitive Overload
Cognitive Overload
Interferes with our ability to focus and be productive.
Potential modern causes:
Mobile apps
Wearable technology
Constant updates
Desire to stay connected
50% of American teens suffer from Fear Of Missing Out (FOMO)
Focus Management
Being Able to Focus Counts
An inability to concentrate for longer periods reduces the ability to distinguish important information from trivia.
Some researchers estimate that distraction costs hundreds of billions of dollars a year in lost productivity.
Heavy online users (high media multitaskers) scored poorly on cognitive tests.
Focus Recovery
Lost focus can take about 25 minutes of recovery time.
Noradrenaline, a chemical that helps us concentrate, is released by focusing.
The best strategy to improve focus: practice doing it.
Researchers disagree about whether multitaskers are working as well as they could or whether they could improve their focus.
Technology Addictions and Focus Management
What are several potential causes of cognitive overload?
What are the consequences of constant distractions?
When a person is distracted, how long does it take to return to the task at hand and get focused again?
Why are senior managers interested in focus management?
What is the difference between the performance of high and low multitaskers on cognitive tests?
How can multitaskers improve their ability to focus?
Suggested Answers:
1. Tweets, texts, e-mail, social media, and annoying electronic static are potential causes.
2. Distractions cause a loss of focus and a loss of productivity.
3. Gloria Mark, a professor of informatics at the University of California, Irvine, says a worker distracted by a Web search that goes rogue or a new text or tweet can take about 25 minutes to return to the task at hand and get focused again (Dumaine, 2014).
4. To improve creativity and productivity. If your mind is free of distraction, your mind is better able to absorb data, interactions, and trends and synthesize the new information with what you already know. As a result, you are more likely to come up with innovative ideas.
5. In contrast to widely held assumptions, subjects who were heavy (high) media multitaskers scored poorly on cognitive tests.
6. The best strategy to improve focus is to practice doing it.
Learning Objectives (4 of 4)
ICT and Sustainable Development
Being profit-motivated without concern for damage to the environment is unacceptable.
Companies should conduct themselves in an ethical, socially responsible, and environmentally sustainable manner.
The IT industry is referred to as the Information and Communications Technology (ICT) sector in emissions reports.
Figure 14.1: The 4 R’s of environmental sustainability
IT and Global Warming
Global warming refers to the upward trend in Global Mean Temperature (GMT).
This is driven by the greenhouse effect, which is the holding of heat within the earth’s atmosphere.
Carbon emissions directly contribute to the greenhouse effect.
Global Warming IT Sector Actions
McKinsey & Company concludes the following:
IT sector’s own footprint of 2 percent of global emissions could double by 2020 because of increased use of tablets, smartphones, apps, and services.
IT sector must continue to reduce emissions from data centers, telecom networks, and the manufacture and use of its products.
IT has the unique ability to monitor and maximize energy efficiency both within and outside of its own industry sector to cut CO2 emissions by up to 5 times this amount.
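The two McKinsey figures above (a 2 percent footprint, and enabled cuts of up to 5 times that amount) can be combined in a quick back-of-envelope calculation. This is only an illustrative sketch of the slide's arithmetic; the variable names are ours, not McKinsey's:

```python
# Back-of-envelope arithmetic for the McKinsey figures on the slide above.
# The two inputs (2% footprint, up-to-5x enablement) come from the slide;
# everything else here is illustrative.
ict_footprint_share = 0.02   # ICT's own share of global carbon emissions
enablement_factor = 5        # ICT can enable cuts of up to 5x its own footprint

enabled_cut_share = ict_footprint_share * enablement_factor
print(f"ICT-enabled reduction potential: {enabled_cut_share:.0%} of global emissions")
# prints: ICT-enabled reduction potential: 10% of global emissions
```

In other words, even if ICT's own footprint doubles, the sector's larger lever is the efficiency it can enable in other industries.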
Sustainability Through Climate Change Mitigation
Every IT user, enterprise, and nation plays a role in climate change mitigation.
Wired and mobile networks enable limitless data creation and consumption.
Energy used to power data centers, cell towers, base stations, and recharge devices is damaging the environment and depleting natural resources.
Innovative sustainability initiatives hold the key to curbing these emissions and carbon footprint, thereby reducing environmental impact.
Technology to Transform Business and Society
People hold the power to shape and apply technology to create positive change, improve lives, and transform business and society.
Accenture’s Technology Vision 2017 is an analysis of key IT trends that are expected to disrupt business and society over the next three years.
According to Vision 2017, taking a people first approach by empowering people with more human technology will allow organizations to improve performance by redefining their relationship with customers and employees from provider to partner.
Top Five Disruptive Technologies 2015-2017
Vision 2015 | Vision 2016 | Vision 2017 |
Internet of Me | Intelligent Automation | Artificial Intelligence as the new User Interface |
Outcome Economy | Liquid Workforce | Design for Humans |
Platform Evolution | Platform Economy | Ecosystems as Macrocosms |
Intelligent Enterprise | Predictable Disruption | Workforce Marketplace |
Workplace Reimagined | Digital Trust | The Uncharted |
Top Five Disruptive Technologies (1 of 2)
AI is the new UI
AI is becoming the new user interface (UI), underpinning the way we transact and interact with systems.
AI will revolutionize the way businesses gain information from and interact with customers.
Design for Humans
Technology design decisions are being made by humans, for humans.
Organizations need to understand not only where people are today, but also where they want to be.
Top Five Disruptive Technologies (2 of 2)
Ecosystems as Macrocosms.
Digital ecosystems are transforming the way organizations deliver value.
Workforce Marketplace.
Companies are dissolving traditional hierarchies and replacing them with talent marketplaces of independent freelance workers.
The Uncharted.
Businesses must delve into uncharted territory, seizing opportunities to establish rules and standards for entirely new industries.
The Next Wave of Disruption
Next…More Disruptive Disruption
High-performing business leaders now accept that their organizations’ future success is tied to their ability to keep pace with technology.
Technology is more important than ever to their business success.
Biggest IT innovations will not be in the technology tools themselves, but in how they are designed with people in mind.
A people first approach is the key to any organization’s digital success.
ICT and Sustainable Development
Why do some experts warn that carbon emission reductions between 50 percent and 85 percent are necessary by 2050?
What contributes to the rise of global mean temperature?
What is the greenhouse effect?
How does the use of mobile devices contribute to the level of greenhouse gases?
What is ICT’s role in global warming?
Why is global warming hotly debated?
What is the role of IT in sustainable development?
Why is it important for organizations to take a people first approach to IT?
Suggested Answers:
1. Carbon emission reductions between 50 percent and 85 percent are necessary by 2050 to prevent the global temperature from rising too much too fast because of the greenhouse effect.
2. Increases in CO2 resulting from human activities that generate carbon emissions have thrown the earth's natural carbon cycle off balance, increasing global temperatures and changing the planet's climate. Climatologists estimated that countries must keep the global mean temperature (GMT) from rising by more than 2°C (3.6°F) above the preindustrial GMT in order to avoid profound damage to life on the earth. Damage includes water and food scarcity, rising sea levels, and greater incidence and severity of disease.
3. The greenhouse effect is the holding of heat within the earth's atmosphere. CO2 and other greenhouse gases (GHGs) trap the sun's heat within the earth's atmosphere, warming it and keeping it at habitable temperatures.
4. The surge in energy used to power data centers, cell towers, base stations, and recharge devices, all of which support mobile devices, is damaging the environment and depleting natural resources.
5. ICT plays a key role in reducing global warming. Transforming the way people and businesses use IT could reduce annual human-generated global emissions by 15 percent by 2020 and deliver energy efficiency savings to global businesses of over 500 billion euros, or $800 billion U.S. And using social media, for example, to inform consumers of the grams (g) of carbon emissions associated with the products they buy could change buyer behavior and ultimately have a positive eco-effect.
6. Many scientists and experts are extremely alarmed by global warming and climate change, but other experts outright deny that they are occurring.
7. Every IT user, enterprise, and nation plays a role in climate change mitigation. Climate change mitigation is any action to limit the magnitude of long-term climate change. Examples of mitigation include switching to low-carbon renewable energy sources and reducing the amount of energy consumed by power stations by increasing their efficiency.
8. According to Accenture's Vision 2017, taking a people first approach by empowering people with more human technology will allow organizations to improve performance by redefining their relationship with customers and employees from provider to partner. This will require organizations to change the way they develop their business models and provide technology that supports them to promote social responsibility.
Copyright
Copyright © 2018 John Wiley & Sons, Inc.
All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information contained herein.
The Journal of Law, Medicine & Ethics, 45 S2 (2017): 42-45. © 2017 The Author(s)
DOI: 10.1177/1073110517750620

Disclose Data Publicly, without Restriction

Peter Doshi and Tom Jefferson
The FDA holds more clinical trial data than any other body on the planet, more than any other regulator and more than any single pharmaceutical company. And at present the FDA sits on those data, treating such data as commercial confidential information that it is not at liberty to disclose.1 This prevents systematic reviewers, guideline committees, other public health bodies, and other third parties from independent evaluation of the clinical evidence the FDA relied upon to determine the benefits and harms of medicines. In other words, 'no sunlight please — we know what is best for you.' This must end, and the Blueprint rightly calls on FDA to "disclose data from scientific studies to enhance understanding of medical products."2
Since concerns about publication bias emerged decades ago,3 systematic reviewers have paid particular attention to addressing the vexing problem of unpublished trials.4 But over the past decade, additional grave concerns have emerged over the trustworthiness of even those clinical trials that are published.5 The inescapable conclusion from this growing body of research is that what we see, even in the most highly regarded peer-reviewed journals, cannot be trusted at face value. We know this only for selected cases, such as with the drugs rofecoxib, celecoxib, paroxetine, gabapentin, and oseltamivir, and often only because litigation or public pressure helped force company reports of clinical trials into the public domain: "clinical study reports." These are unabridged reports of clinical studies created by industry following a set format primarily for regulatory review.6 Publications by contrast may omit or minimize mention of serious adverse events that occurred during the trial. They may misreport the trial's primary endpoint, the duration of the trial, or other aspects of the protocol. They may fail to disclose limitations in reliability of the collected data, or lapses in the conduct of the trial. They may describe a substance as placebo when it's not inert or they may fragment a dataset in a number of reports — some visible, some not.
Those that believe sound clinical decisions can be made based on the evidence available in journal publications alone are, whether they realize it or not, betting that the problems that have been discovered thus far are "in the past," "bad apples," "all fixed" and that underreporting and misreporting of trials no longer occur. This is a dangerous and costly bet to make on human welfare. It also is an unnecessary bet, as the raw data from clinical trials already exist. All it takes is disclosure of the data to enable independent scrutiny.
We therefore believe that public disclosure of the clinical trial data in FDA's possession is not simply an "opportunity to enhance transparency at FDA," to quote the Blueprint, but is rather an ethical imperative to ensure evidence-based medicine is truly based on evidence and not a selected summary of it.
Peter Doshi, Ph.D., is an assistant professor at the University of Maryland School of Pharmacy and associate editor at The BMJ. His research focuses on policies related to drug safety and effectiveness evaluation in the context of regulation and evidence synthesis. Tom Jefferson, M.D., is a senior associate tutor at the University of Oxford. He is a physician, researcher, and campaigner for access to randomized controlled trial data.

Public Release or Not?

Now comes the issue of just how to disclose clinical study reports (Recommendation 16). In this regard, we believe the Blueprint's suggestions need clarifying. The Blueprint advocates both "harmonizing FDA policy with that of the European Medicines Agency" (which has been releasing clinical study reports since late 2010 through a freedom of information-like mechanism) and using a repository that "employ[s] safeguards" prior to sharing individual participant data, including verifying that "the research proposed would advance science or improve public health and healthcare, check institutional status, and create legally enforceable agreements that ensure applicants will not compromise patient identity." These suggestions must be considered in light of the fact that the EMA does not screen requests, a fundamental and important contrast to the many data access (not data sharing) systems now available such as the joint pharmaceutical company sponsored ClinicalStudyDataRequest.com. While we share the aims of groups making data accessible — ensuring responsible research conduct, protecting the rights of patients, and good data stewardship — we disagree that gatekeeping is the appropriate means to that end and we have several years' experience to back our views. We place more faith in the structures of open science to police misconduct and reward good behavior, and believe a regulator's duty in a political democracy is to ensure the basis for all citizens to make informed decisions about medicines based on the data in its possession. Thus we challenge the Blueprint's proposed scope for limiting FDA's transparency involvement in this area to only those trials "where sponsors have not already made their data available by other means." Researchers who have reused data from ClinicalStudyDataRequest.com have likened it to doing research "through a periscope";7 this is unsatisfactory.
The gatekeeper-free approach EMA adopted is thus the correct one, but it suffers from an inability to keep pace with the volume of requests. In 2013, the agency received fewer than 300 requests through its "reactive" access to documents policy 0043. In 2016, it received more than 800, and by the end of 2017, these are likely to top 1000.8 Overload has set in despite an increase in staff to 12 full-time employees who communicate with requestors and sponsors of affected products and oversee the redaction process to protect the privacy and integrity of individuals that may be named — or discoverable — in documents.9 A second policy the EMA launched last year (policy 0070) proactively publishes clinical study reports to the web.10
Individual Participant Data

While clinical study reports are paper (PDF) documents that routinely run hundreds to thousands of pages in length, electronic patient-level datasets raise heightened concerns about the risk of re-identification of patients. There is no doubt that this risk increases with access to electronic patient-level datasets. Even if patients cannot be re-identified using a single trial dataset, risk of re-identification rises with linkage to other datasets. As EMA does not routinely request participant-level data, whereas the FDA does, FDA must forge new ground in setting standards of how to effectively balance the imperative to share these data while also working to reduce the risk of re-identification of participants. Because the probability of re-identification, even after anonymization, is generally not zero, the pros and cons of various control mechanisms (such as contracts or more restrictive terms on access) will need to be considered. Public release of data, without restriction, may pose a low risk of re-identification for certain clinical trial datasets whereas for other datasets, more restrictive measures may be necessary to meet the public's perception of "acceptable risk." This is a field that data scientists have been working in for some time, and the FDA should elicit their contribution in establishing standards.11
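The linkage risk described above can be made concrete with a toy k-anonymity check: a dataset is k-anonymous when every combination of quasi-identifying attributes is shared by at least k records. This sketch is not from the article or the Blueprint; the dataset, field names, and choice of quasi-identifiers are hypothetical:

```python
# Toy illustration of k-anonymity: the more quasi-identifiers an attacker can
# combine (e.g., via linkage to other datasets), the smaller the groups that
# share each combination, and the easier re-identification becomes.
# All records and field names below are fabricated for illustration.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size over all quasi-identifier combinations.
    The dataset is k-anonymous for the largest k <= this value."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

trial = [
    {"zip": "21201", "age_band": "60-69", "sex": "F", "outcome": "improved"},
    {"zip": "21201", "age_band": "60-69", "sex": "F", "outcome": "no change"},
    {"zip": "21201", "age_band": "60-69", "sex": "M", "outcome": "improved"},
    {"zip": "21230", "age_band": "60-69", "sex": "M", "outcome": "improved"},
]

# A single coarse attribute leaves a large anonymity set...
print(k_anonymity(trial, ["age_band"]))               # prints: 4
# ...but combining quasi-identifiers shrinks groups to unique records.
print(k_anonymity(trial, ["zip", "age_band", "sex"])) # prints: 1
```

This is why the probability of re-identification "is generally not zero" even after anonymization: each dataset a record can be linked against effectively adds quasi-identifier columns and drives k toward 1.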
Correcting Misleading Information

In the few years that have now passed since the launch of EMA Policy 0043, YODA, and ClinicalStudyDataRequest.com, one can ask whether these systems for data sharing and data access-without-sharing are being sufficiently used, with outcomes that offset the cost and effort involved in making them exist. We think the answer is a definite 'yes' and as evidence offer the case of oseltamivir. Since 2004, the US government in the pandemic influenza preparedness and response plan offered a scientific rationale to justify
its stockpiling of oseltamivir: the drug was to cut rates of serious complications of influenza and hospitalizations in half.12 This rationale was based on a six-page journal article; four of the six authors were employees of the manufacturer, and one was a paid consultant.13
Had our Cochrane review team reviewed clinical study reports in the year 2000 rather than 2011, we could have shown that oseltamivir was not proven to reduce these risks years before governments stockpiled billions of dollars worth of the drug.14 Just one oseltamivir-like experience every decade surely offers the opportunity to correct misleading information and save orders of magnitude more money than it costs to ensure timely public access to clinical trial data in FDA's possession.
At the same time, FDA is in a position to do more than just release clinical trial data. It can also help "correct misleading information in the market" itself, as the Blueprint advocates (Recommendation 15). In numerous cases, important discrepancies between the published reports of clinical trials and the data submitted to FDA are known to FDA scientists. Quickly correcting misleading publications of trials soon after drug approval could prevent many adverse downstream effects, and FDA scientists are well positioned to do this.15 If all it takes to achieve this aim are extra resources, then we think these would be well spent, considering the threat to life and taxpayers' wallets that the cited cases entailed. We therefore believe that in addition to the Blueprint's prudent suggestions to establish a standard for correcting misleading information by public servants, the Agency should also actively encourage its scientists to help ensure the accuracy of the medical literature by engaging directly, without need for sign-off from one's superiors. Less bureaucracy and secrecy and more sunlight is needed if regulation is to regain its lost reputation and fulfill its public health mission.
Note

Dr. Doshi and Dr. Jefferson are co-recipients of a grant from the Laura and John Arnold Foundation to establish a RIAT Support Center and in receipt of a Cochrane Methods Innovations Fund grant to develop guidance on the use of regulatory data in Cochrane reviews. Dr. Doshi and Dr. Jefferson were also co-recipients of a UK National Institute for Health Research grant (HTA – 10/80/01: Update and amalgamation of two Cochrane Reviews: neuraminidase inhibitors for preventing and treating influenza in healthy adults and children).
In addition, Dr. Jefferson receives royalties from his books published by Blackwells and Il Pensiero Scientifico Editore, Rome. Dr. Jefferson is occasionally interviewed by market research companies for anonymous interviews about Phase 1 or 2 pharmaceutical products. In 2011-2013, Dr. Jefferson acted as an expert witness in a litigation case related to oseltamivir phosphate (Tamiflu, Roche) and in a labour case on influenza vaccines in healthcare workers in Canada. In 1997-99 Dr. Jefferson acted as a consultant for Roche,
in 2001-2 for GSK, and in 2003 for Sanofi-Synthelabo for pleconaril (an anti-rhinoviral, which did not get approval from the Food and Drug Administration). Dr. Jefferson was a consultant for IMS Health in 2013, and in 2014 was retained as a scientific adviser to a legal team acting on the drug Tamiflu (oseltamivir, Roche). In 2014-15, Dr. Jefferson was a member of two advisory boards for Boehringer. Dr. Jefferson has a potential financial conflict of interest in the investigation of the drug oseltamivir. Dr. Jefferson was a member of an Independent Data Monitoring Committee for a Sanofi Pasteur clinical trial. Dr. Jefferson is a co-signatory of the Nordic Cochrane Centre Complaint to the European Medicines Agency (EMA) over maladministration at the EMA in relation to the investigation of alleged harms of HPV vaccines and consequent complaints to the European Ombudsman.
Dr. Doshi received €1500 from the European Respiratory Society in support of his travel to the society's September 2012 annual congress in Vienna, where he gave an invited talk on oseltamivir. Dr. Doshi gratefully acknowledges the American Association of Colleges of Pharmacy for its funding support ($11,000) for a study to analyze written medical information regarding the possible harms of statins. AACP had no involvement in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of this manuscript. Dr. Doshi is also an associate editor of The BMJ and an unpaid member of the IMEDS steering committee at the Reagan-Udall Foundation for the FDA, which focuses on drug safety research.
References
1. Food and Drug Administration, "Availability of Masked and De-identified Non-Summary Safety and Efficacy Data; Request for Comments (Docket No. FDA-2013-N-0271)," Federal Register 78 (2013): 33421-33423.
2. FDA Transparency Working Group, "Blueprint for Transparency at the U.S. Food and Drug Administration," Journal of Law, Medicine & Ethics 45, no. 4, Suppl. (2017): 7-23.
3. R. J. Simes, "Publication Bias: The Case for an International Registry of Clinical Trials," Journal of Clinical Oncology 4, no. 10 (1986): 1529-1541; K. Dickersin and Y. I. Min, "Publication Bias: The Problem That Won't Go Away," Annals of the New York Academy of Science 703 (1993): 135-146, at discussion 146-148; The Cochrane Collaboration, ed., "Publication Bias in Clinical Trials Due to Statistical Significance or Direction of Trial Results," in Cochrane Database of Systematic Reviews (Chichester, UK: John Wiley & Sons, Ltd., 1996): at 147 (2nd International Conference Scientific Basis of Health Services & 5th Annual Cochrane Colloquium, October 5-8, 1997, Amsterdam, The Netherlands, vol. 46).
4. J. P. T. Higgins and S. Green, eds., Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011), The Cochrane Collaboration, 2011.
5. A.-W. Chan, A. Hróbjartsson, M. T. Haahr, P. C. Gøtzsche, and D. G. Altman, "Empirical Evidence for Selective Reporting of Outcomes in Randomized Trials: Comparison of Protocols to Published Articles," JAMA 291, no. 20 (2004): 2457-2465; D. W. Coyne, "The Health-Related Quality of Life Was Not Improved by Targeting Higher Hemoglobin in the Normal Hematocrit Trial," Kidney International 82, no. 2 (2012): 235-241; K. Dwan, C. Gamble, P. R. Williamson, and J. J. Kirkham, Reporting Bias Group, "Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias: An Updated Review," PLoS One 8, no. 7 (2013): e66844; D. Eyding, M. Lelgemann, U. Grouven, M. Härter, M. Kromp, T. Kaiser, et al., "Reboxetine for Acute Treatment of Major Depression: Systematic Review and Meta-analysis of Published and Unpublished Placebo and Selective Serotonin Reuptake Inhibitor Controlled Trials," BMJ 341 (2010): c4737; R. Fu, S. Selph, M. McDonagh, K. Peterson, A. Tiwari, R. Chou, et al., "Effectiveness and Harms of Recombinant Human Bone Morphogenetic Protein-2 in Spine Fusion:
[Doshi and Jefferson, The Journal of Law, Medicine & Ethics 45, S2 (2017): 42-45. © 2017 The Author(s)]
A Systematic Review and Meta-analysis," Annals of Internal Medicine 158, no. 12 (2013): 890-902; S. Golder, Y. K. Loke, K. Wright, and G. Norman, "Reporting of Adverse Events in Published and Unpublished Studies of Health Care Interventions: A Systematic Review," PLoS Medicine 13, no. 9 (2016): e1002127; S. Hughes, D. Cohen, and R. Jaggi, "Differences in Reporting Serious Adverse Events in Industry Sponsored Clinical Trial Registries and Journal Articles on Antidepressant and Antipsychotic Drugs: A Cross-Sectional Study," BMJ Open 4, no. 7 (2014): e005535; M. Huić, M. Marušić, and A. Marušić, "Completeness and Changes in Registered Data and Reporting Bias of Randomized Controlled Trials in ICMJE Journals after Trial Registration Policy," PLoS One 6, no. 9 (2011): e25258; T. Jefferson, P. Doshi, M. Thompson, and C. Heneghan, "Ensuring Safe and Effective Drugs: Who Can Do What It Takes?" BMJ 342 (2011): c7258; T. Jefferson, M. A. Jones, P. Doshi, C. B. Del Mar, C. J. Heneghan, R. Hama, et al., "Neuraminidase Inhibitors for Preventing and Treating Influenza in Healthy Adults and Children," Cochrane Database of Systematic Reviews 1 (2012): CD008965; M. Köhler, S. Haag, K. Biester, A. C. Brockhaus, N. McGauran, U. Grouven, et al., "Information on New Drugs at Market Entry: Retrospective Analysis of Health Technology Assessment Reports Versus Regulatory Reports, Journal Publications, and Registry Reports," BMJ 350 (2015): h796; J. Le Noury, J. M. Nardo, D. Healy, J. Jureidini, M. Raven, C. Tufanaru, et al., "Restoring Study 329: Efficacy and Harms of Paroxetine and Imipramine in Treatment of Major Depression in Adolescence," BMJ 351 (2015): h4320; A. Lundh, J. Lexchin, B. Mintzes, J. B. Schroll, and L. Bero, "Industry Sponsorship and Research Outcome," Cochrane Database of Systematic Reviews 2 (2017): MR000033; E. Maund, B. Tendal, A. Hróbjartsson, K. J. Jørgensen, A. Lundh, J. Schroll, et al., "Benefits and Harms in Clinical Trials of Duloxetine for Treatment of Major Depressive Disorder: Comparison of Clinical Study Reports, Trial Registries, and Publications," BMJ 348 (2014): g3510; N. McGauran, B. Wieseler, J. Kreis, Y.-B. Schüler, H. Kölsch, and T. Kaiser, "Reporting Bias in Medical Research: A Narrative Review," Trials 11 (2010): 37; M. A. Rodgers, J. V. E. Brown, M. K. Heirs, J. P. T. Higgins, R. J. Mannion, M. C. Simmonds, et al., "Reporting of Industry Funded Study Outcome Data: Comparison of Confidential and Published Data on the Safety and Effectiveness of rhBMP-2 for Spinal Fusion," BMJ 346 (2013): f3981; P. Saini, Y. K. Loke, C. Gamble, D. G. Altman, P. R. Williamson, and J. J. Kirkham, "Selective Reporting Bias of Harm Outcomes within Studies: Findings from a Cohort of Systematic Reviews," BMJ 349 (2014): g6501; F. Song, S. Parekh, L. Hooper, Y. K. Loke, J. Ryder, A. J. Sutton, et al., "Dissemination and Publication of Research Findings: An Updated Review of Related Biases," Health Technology Assessment 14, no. 8 (2010): iii, ix-xi, 1-193; E. H. Turner, A. M. Matthews, E. Linardatos, R. A. Tell, and R. Rosenthal, "Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy," New England Journal of Medicine 358, no. 3 (2008): 252-260; S. S. Vedula, T. Li, and K. Dickersin, "Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin,"
PLoS Medicine 10, no. 1 (2013): e1001378; S. S. Vedula, L. Bero, R. W. Scherer, and K. Dickersin, "Outcome Reporting in Industry-Sponsored Trials of Gabapentin for Off-Label Use," New England Journal of Medicine 361, no. 20 (2009): 1963-1971; B. Wieseler, M. F. Kerekes, V. Vervoelgyi, N. McGauran, and T. Kaiser, "Impact of Document Type on Reporting Quality of Clinical Drug Trials: A Comparison of Registry Reports, Clinical Study Reports, and Journal Publications," BMJ 344 (2012): d8141; B. Wieseler, N. Wolfram, N. McGauran, M. F. Kerekes, V. Vervölgyi, P. Kohlepp, et al., "Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data," PLoS Medicine 10, no. 10 (2013): e1001526; V. Yank, D. Rennie, and L. A. Bero, "Financial Ties and Concordance between Results and Conclusions in Meta-analyses: Retrospective Cohort Study," BMJ 335, no. 7631 (2007): 1202-1205; S. S. Vedula, P. S. Goldman, I. J. Rona, T. M. Greene, and K. Dickersin, "Implementation of a Publication Strategy in the Context of Reporting Biases: A Case Study Based on New Documents from Neurontin Litigation," Trials 13 (2012): 136.
6. P. Doshi and T. Jefferson, "Clinical Study Reports of Randomised Controlled Trials: An Exploratory Review of Previously Confidential Industry Reports," BMJ Open 3, no. 2 (2013), available at doi:10.1136/bmjopen-2012-002496 (last visited November 8, 2017).
7. See Le Noury, et al., supra note 5.
8. Personal communication with Anne-Sophie Henry-Eude, June 27 and July 24, 2017.
9. P. Doshi and T. Jefferson, "Open Data 5 Years On: A Case Series of 12 Freedom of Information Requests for Regulatory Data to the European Medicines Agency," Trials 17 (2016): 78.
10. European Medicines Agency, "Online Access to Clinical Data for Medicinal Products for Human Use," European Medicines Agency Clinical Data website [URL omitted] (last visited November 8, 2017).
11. K. El Emam, S. Rodgers, and B. Malin, “Anonymising and Sharing Individual Patient Data,” BMJ 350 (2015): h1139.
12. HHS, Draft Pandemic Influenza Preparedness and Response Plan (2004), cited July 5, 2010 [URL omitted] (last visited November 8, 2017); HHS, HHS Pandemic Influenza Plan (2005), cited January 23, 2013 [URL omitted] (last visited November 8, 2017).
13. L. Kaiser, C. Wat, T. Mills, P. Mahoney, P. Ward, and F. Hayden, "Impact of Oseltamivir Treatment on Influenza-Related Lower Respiratory Tract Complications and Hospitalizations," Archives of Internal Medicine 163, no. 14 (2003): 1667-1672.
14. See Jefferson, et al., supra note 5; T. Jefferson and P. Doshi, “Multisystem Failure: The Story of Anti-influenza Drugs,” BMJ 348 (2014): g2263.
15. P. Doshi and F. Godlee, "The Wider Role of Regulatory Scientists," BMJ 357 (2017): j1991.
Copyright of Journal of Law, Medicine & Ethics is the property of Sage Publications Inc. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express written permission. However, users may print, download, or email articles for individual use.
European Journal of Public Health, Vol. 29, Supplement 3, 18–22
© The Author(s) 2019. Published by Oxford University Press on behalf of the European Public Health Association. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
doi:10.1093/eurpub/ckz167
……………………………………………………………………………………………
Ethical aspects of digital health from a justice point of view
Caroline Brall1, Peter Schröder-Bäck2,3, Els Maeckelberghe3,4

1 Department of Health and Technology, Health Ethics and Policy Lab, ETH Zurich, Zurich, Switzerland
2 Department of International Health, Care and Public Health Research Institute (CAPHRI), Maastricht University, Maastricht, The Netherlands
3 Section Ethics in Public Health, European Public Health Association (EUPHA)
4 Institute for Medical Education, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands

Correspondence: Caroline Brall, Department of Health and Technology, Health Ethics and Policy Lab, ETH Zurich, Hottingerstrasse 10, 8092 Zürich, Switzerland, Tel: +41 632 42 69, e-mail:
Digital health is transforming healthcare systems worldwide. It promises benefits for population health but might also lead to health inequities. From an ethical perspective, it is hence essential to adopt a fair approach. This article aims at outlining chances and challenges from an ethical perspective, focusing especially on the dimension of justice—a value which has been described as the core value for public health. Analysed through the lenses of a standard approach for health justice—Norman Daniels' account of just health and accountability for reasonableness—the most recent and relevant literature was reviewed and challenges from a justice point of view were identified. Among them are challenges with regard to digital illiteracy, resulting inequities in access to healthcare, truthful information sharing with end users demanding fully informed consent, dignity, and fairness in storage, access, sharing and ownership of data. All stakeholders involved bear responsibilities to shape digital health in an ethical and fair way. When all stakeholders, especially digital health providers and regulators, ensure that digital health interventions are designed and set up in an ethical and fair way and foster health equity for all population groups, there is a chance for this transformation to result in a fair approach to digital health.
……………………………………………………………………………………………
Introduction
Digital technology is already part of our daily lives. We use smartphones to navigate our routes and order our purchases. In the field of health, too, the digital dimension is ever increasing, and in the last few years digital health initiatives have received much interest and increasing investment from public and private sources.

The purposes and utilizations of digital health are to monitor, prevent, screen, diagnose and treat health-related issues at the healthcare and public health level. Digital health methods are increasingly embraced to strengthen health systems worldwide, as for instance put forward in the recently published recommendations on digital interventions for health system strengthening.1 Kickbusch terms this ongoing digital transformation within health and medical care 'health 4.0',2 highlighting the importance of adjusting existing practice and governance structures to meet the challenges implicated by digital health: for instance, how data should be stored and accessed and by whom, who can benefit from digital health and who is at risk of being excluded, and which types of informed consent should be employed. In view of this changing cultural environment, it is important to carefully consider the chances and challenges from an ethical perspective in order to establish and frame a sound and fair approach for digital health. Yet publications that sketch the ethics of digital health are still scarce, given that this is an innovative field of public health. Thus, in the following we outline these chances and challenges from an ethical perspective, focusing especially on the dimension of justice—a value which has been described as the core value for public health.3 Justice is closely linked to and addresses 'questions of responsibilities and obligations'4 when it comes to balancing benefits and risks of population health interventions. More concretely, we define justice in line with Norman Daniels' account of just health and accountability for reasonableness,5,6 which has been considered the 'most well-known rationale'7 for health and fairness. The argument he started developing more than 30 years ago has been considered a 'seminal' and 'classic'8 work. It is considered a standard approach and among the 'key narratives (and vocabulary)'9 in public health ethics, in research and teaching.
Daniels argues that justice describes the social obligations to promote and restore health as a means to achieve individual opportunities and exercise individual autonomy.10 He specifies that everyone should have fair access to public health and healthcare in order to have fair equality of opportunity in society, resulting in health equity.5 Daniels also states that fair processes are needed to ensure legitimacy and fairness. His concept of accountability for reasonableness declares that policies should be made in a transparent way, based on reasonable arguments and with the option of being revised.6 Such a public health justice approach towards the implications—the chances and challenges—of digital health can uncover what is ethically at stake and where responsibilities lie for those involved, and can guide and justify resulting policy choices.
Thus, with this understanding of a public health justice approach, we discuss the ethical chances and challenges unfolding in digital health. We base our analytic overview of these issues on a narrative review in order to obtain a broad perspective on recent and relevant literature on digital (public) health. We point out what ethical guidance is needed and for whom, and finally we address existing policy and practice initiatives to foster ethical digital health.
Ethical chances and challenges of digital health
The sphere in which ethical issues in digital health proliferate is multidimensional. First, it is dependent on the distinct phases of digital health usage, i.e. before accessing digital health technologies, during as well as after usage. Second, different stakeholders from the
medical and non-medical, public and private arena are involved, setting new challenges with regard to governance structures, emphasizing the need for rethinking responsibilities. Third, challenges are on the one hand tied to technical issues, such as how to protect data (e.g. secure storage, firewalls, etc.). On the other hand, they are tied to aspects related to general governance (as for instance accountability and transparency). Besides these challenges, which can even result in physical, psychological or social harms to individuals,11 there are also chances for using digital health to establish fairer health systems. We will address these challenges and chances (mentioned in the literature), following their occurrence during the distinct phases of digital health usage.
Before utilization of digital health
Access
The first phase of digital health usage is before users actually access such technologies and applications, where ethical considerations arise in connection with access. They specifically centre around logistic and resource-related aspects, including equitable access to digital health services in terms of affordability of and access to technological equipment.12 Here the availability of such services also plays a role: for underserved communities and populations, for instance people suffering from rare diseases, the elderly or the homeless, digital health services might not be offered or even developed. It remains crucial to safeguard fairness and equity in access already when developing such digital health approaches.13 Integrating such ethical considerations in the planning phase is—mostly in the field of artificial intelligence for health—referred to as 'ethics by design'.14
The developers of digital health interventions hence have a moral responsibility to design such technologies in a way that takes ethical forethoughts and aspects into account, for instance by designing algorithms for artificial intelligence that represent all parts of the population and leave no ground for bias and resulting discrimination.
In general, the employment of digital health technologies can give rise to inequalities in access which go beyond affordability of technology and depend on the individual's technological ability and capacity to engage with e-health tools. When certain populations are excluded from using such technologies, for instance due to age-related socialization and sometimes corresponding digital illiteracy, the danger of an unjust health system 4.0 is real. However, digital health technologies also offer chances for inclusion of population groups which experience barriers to accessing conventional healthcare provision, for instance due to the geographical distance to reach medical settings in general or specialized healthcare professionals, or due to physical inability to travel to medical sites on a regular basis. Here, digital health can be seen as an enabler of fair and accessible health provision by extending healthcare coverage to areas and persons with previously limited access to health services or research.15 This, again, can save overall healthcare costs through efficiency improvements and provides a more demand-oriented provision of healthcare services. Also, possible increases in coverage contribute to improving global health and can be evaluated as a measure to improve equality of opportunity.
Truthful information, empowerment and informed consent
In order to make people capable of actually using the opportunities offered to them if they wish, truthful information about the benefits and risks of engaging in digital health methods has to be provided to individual users. Hence, users should be motivated and empowered (in an informational as well as a technical sense) to engage with digital health technology. For this, open communication, technical training and education should be offered. It is important that their participation is voluntary and is not undermined by any sort of incentive, be it of a financial nature or the prioritization of those who use digital health technologies when they seek medical care in non-digital, conventional healthcare settings. Nor may declining to use these opportunities be sanctioned or result in a lack of access to health services. Moreover, users should be aware that their data are being collected for health-related purposes, for instance in the case of location trackers, which can give information about an individual's health (e.g. when frequent visits to hospitals or other healthcare sites are documented). Yet, for public health purposes, aggregate information, e.g. from social media posts about flu symptoms, could give hints of the spread of diseases—techniques referred to as digital epidemiology and epidemic forecasting. In general, however, there is the danger of digital health establishing a surveillance society. This and other contested uses should be prohibited by law and prevented in practice.
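The aggregate character of digital epidemiology described above can be illustrated with a minimal sketch: posts are reduced to per-day counts of symptom mentions, and only days with unusually many mentions are flagged. The keyword list, threshold and function names are illustrative assumptions, not anything proposed in the article.

```python
from collections import Counter

# Illustrative keyword list; a real system would use validated terminology.
SYMPTOM_TERMS = {"fever", "cough", "chills", "aching"}

def daily_symptom_counts(posts):
    """Reduce (day, text) pairs to per-day counts of symptom mentions.

    Only aggregate counts are retained; the posts themselves and any
    author identifiers are discarded immediately, in line with the
    privacy concerns raised in the text.
    """
    counts = Counter()
    for day, text in posts:
        lowered = text.lower()
        if any(term in lowered for term in SYMPTOM_TERMS):
            counts[day] += 1
    return counts

def spike_days(counts, baseline, factor=2.0):
    """Flag days whose count exceeds `factor` times an expected baseline."""
    return sorted(day for day, n in counts.items() if n > factor * baseline)
```

Even this toy version makes the ethical trade-off concrete: the forecast signal needs only counts, never the identity of the person posting.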
As regards truthful information, informed consent also plays a major role. Whereas traditional models of informed consent aimed to inform patients and research subjects and primarily focused on avoiding harm to the individual in the course of the procedure—thus having a limited time span—new models of informed consent for digital health have to be considered. Those new models should not only take into account intended and unintended uses of data provided by aware users, but should also consider the larger time dimension when data are stored (and potentially used) for a considerable amount of time. Additionally, certain types of digital health, e.g. when genetic data are involved, extend the knowledge gained about an individual to his or her genetically related family members. Revision of existing and traditional models of informed consent, such as opt-out, waiver, no consent and open or categorical consent, is needed to meet the challenges posed and to adjust consent mechanisms accordingly, ensuring and promoting autonomy for everyone in line with fair data uses.16
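The "larger time dimension" of consent discussed above can be pictured as a consent record that is scoped to explicit purposes, expires rather than lasting indefinitely, and can be withdrawn at any time. This is a hedged sketch; the class and field names are illustrative assumptions, not a model the article proposes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    user_id: str
    purposes: set          # uses the data subject explicitly agreed to
    granted_at: datetime
    valid_for: timedelta   # consent lapses instead of lasting forever
    revoked: bool = False  # the subject can withdraw at any time

    def permits(self, purpose: str, at: datetime) -> bool:
        """A use is permitted only for a consented purpose, within the
        validity window, and while consent has not been withdrawn."""
        if self.revoked or at > self.granted_at + self.valid_for:
            return False
        return purpose in self.purposes
```

The design choice worth noting is that expiry and revocation are first-class: a data use that was legitimate when consent was given can become illegitimate later, which static one-off consent forms cannot express.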
During utilization of digital health
Fairness in storage, access, sharing and ownership
During the phase of actual utilization of digital health technologies, and also thereafter, challenges of ethical concern arise with regard to storage, access, sharing and ownership of data, as well as the return of results. Apart from touching relevant ethical considerations in line with security, privacy, confidentiality, discrimination, unintended uses of data and the right to know or not to know results, including incidental findings, these aspects also have implications for a fair use of digital health.
Initially, data have to be stored in such a way that no unauthorized access through hacking or other fraud is facilitated, which would allow for discrimination and stigmatization when confidential information falls into the wrong hands. Also, when data collectors grant access to other stakeholders, various considerations with a view to fairness emerge: what is the purpose of accessing and using the data? What is the benefit for providing and accessing stakeholders? Do they pursue commercial objectives or benefit for the public? Are users aware of the uses of their data? And are these data only used for intended purposes, or also for unintended uses? These questions are relevant to address as they not only touch the ethical issues of autonomy, informed choice and the right to privacy, but are also closely interlinked with justifiable uses of data based on the individual's right to determine what his or her personal information is used for.
A fair use of data should furthermore be guaranteed as regards ownership of data, circling around the questions of who owns the data and who is its custodian: data collectors, users themselves, governments, public organizations, etc. Although no universal regulation has been established yet,17 it should be guaranteed that individuals who donate their data are not exploited. It also remains to be regulated who should be eligible to benefit financially from donated data, under what conditions and to what extent. Beyond the financial benefit, there is nevertheless a benefit in terms of welfare for the public.
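The fairness questions raised above (purpose of access, user awareness, traceability) can be sketched as a purpose-gated access check that records every request in an audit log, so that decisions can be traced afterwards. Function and field names are illustrative assumptions only, not an implementation from the article.

```python
def request_access(dataset, requester, purpose, consented_purposes, audit_log):
    """Release data only for purposes the data subjects consented to,
    and append every decision (granted or not) to an audit log so that
    access can later be traced and stakeholders held accountable."""
    granted = purpose in consented_purposes
    audit_log.append({
        "requester": requester,
        "purpose": purpose,
        "granted": granted,
    })
    return dataset if granted else None
```

Note that denied requests are logged too: an accountability mechanism that records only successes cannot show who tried to use data for unintended purposes.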
According to Topol, a 'democratization of medicine' is supported by digital health, granting individuals increased access to their medical information, which increases their freedom to direct their health more autonomously.18 What can be an advantage for some also has to be regarded with caution, so that those who are less able to manage their own health are not overburdened.
Dignity and autonomy
Moreover, digital health tools should only be applied when the dignity of the patient can be preserved. For instance, in the case of using telemedicine in hospital settings, the conveyance of potentially bad news should uphold the dignity of the patient, and distant technologies (screens) should therefore be avoided when delivering news that puts the patient in a vulnerable situation. Instead, personal, face-to-face communication is preferred to protect the dignity of patients in vulnerable situations. Here, however, autonomy—in terms of patients' choice of the communication channel—can tailor the delivery of healthcare to patients' needs. Conversely, patients who do not want to be institutionalized can stay at home longer and be better supported in their home environment by means of telemedicine. Their quality of life and dignity can thus be increased through the use of such technologies.
Although no all-encompassing account of the ethical issues surrounding digital health can be provided, given that the field is still evolving and other questions of moral concern will be emerging, we have set out the issues which are pressing from a justice point of view. These concrete issues are reflected by ethical values (adapted from Royakkers et al. and extended by specifications of the Daniels framework of justice),19 which are based on Daniels' account of justice and the discussion above. An overview of the ethical values involved in digital health and an exemplification of the issues touched is provided in table 1.
What ethical guidance is needed and for whom?
In view of the array of ethical challenges arising in the application of digital health, guidance should be given. In order to clarify for whom guidance would be necessary, we first determine who the stakeholders involved are. Digital health is integrated in a complex network of different parties, involving not only the users and providers of digital health technologies and applications. While the range of providers can already vary widely, stemming from public or governmental sources to private companies such as app technology start-ups or pharmaceutical and medical device companies, other stakeholders comprise doctors, who are responsible for medical programme planning and realization, and researchers for data analytics. Further stakeholders are insurers, government entities, non-governmental organizations and society in general, who are usually the implementation partners for digital health interventions. Here, it remains crucial that exploitation of end user data is prevented and data are only used for purposes of which end users are aware when consenting to data collection. Surveillance of data by governments, and the screening of data by insurance companies to reject people's applications, have to be avoided at all costs. Underlying values and current as well as expected governance mechanisms need to be systematically addressed in order to increase adoption by end users or patients.
Herein, we deem two values to be key for establishing digital health interventions in line with Daniels' account of justice on a broader scale: trust and empowerment. In previous digital health initiatives, as for example the 100,000 Genomes Project or initiatives by the Academy of Medical Sciences,20 building and maintaining trust among multiple communities was experienced as a main challenge. Focusing on raising awareness among and engaging the public, as well as building trust through open communication, a common language, ongoing conversation and partnership, are considered important. By such open communication and partnerships, users are subsequently empowered. Another mechanism to empower users is to foster digital literacy. Digital literacy refers to the 'capabilities and understanding required to allow an individual to effectively engage with a data-driven technology or the processes that surround its use'.20 When users are empowered and capable of using digital health technologies accurately, this can be regarded as a chance to realize their individual opportunities for health in line with Daniels' account of justice.5
In general, all stakeholders assume responsibilities, rights and duties, which together build an interdependent network. Those stakeholders who develop and provide digital health interventions in particular carry a special role with regard to ensuring a just use and implementation of digital health technologies. Attending to those responsibilities also increases their trustworthiness. Furthermore, trustworthiness can be increased by procedural values that guide governance for service providers and emphasize what fair digital health provision should focus on. The most important procedural values for enabling fair digital health are transparency, accountability and inclusiveness. Transparency about structures, about the algorithms underlying digital health tools, and about the stakeholders involved and their interests empowers users to make better-informed decisions about engaging with such interventions. Accountability structures and mechanisms can ensure that responsibilities can be traced and that stakeholders can be held to account.21 Last, but maybe most importantly, inclusiveness should be a key element in setting up digital health interventions, so that inequities are diminished.
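The idea that 'responsibilities can be traced' can be made concrete in software. The following is a minimal, hypothetical sketch of our own (it does not appear in the article or in any cited guideline): an append-only audit log that records which stakeholder accessed which health record and for what purpose, so that accountability questions can later be answered. All names (`AccessEvent`, `AuditLog`, the actor labels) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class AccessEvent:
    """One traceable action on a user's health data."""
    actor: str        # e.g. "ad-network" (hypothetical stakeholder label)
    purpose: str      # e.g. "profiling"
    record_id: str    # pseudonymous record identifier
    timestamp: str    # UTC timestamp of the access

@dataclass
class AuditLog:
    """Append-only log: events are added, never edited or removed."""
    _events: List[AccessEvent] = field(default_factory=list)

    def record(self, actor: str, purpose: str, record_id: str) -> AccessEvent:
        event = AccessEvent(actor, purpose, record_id,
                            datetime.now(timezone.utc).isoformat())
        self._events.append(event)
        return event

    def by_actor(self, actor: str) -> List[AccessEvent]:
        """Trace all actions of one stakeholder, e.g. for an accountability review."""
        return [e for e in self._events if e.actor == actor]

log = AuditLog()
log.record("app-provider", "symptom-check", "rec-001")
log.record("ad-network", "profiling", "rec-001")
print(len(log.by_actor("ad-network")))  # 1 event traceable to the ad network
```

A review body could then query the log per stakeholder, mirroring the article's point that accountability requires traceable responsibilities.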
Policy and practice initiatives to foster ethical digital health
In view of the magnitude of ethical issues emerging with the application of digital health technology, policy initiatives are needed that specifically address those concerns. Recently, the ethical dimension has gained increasing attention in the policy field, as moral questions of artificial intelligence and its underlying algorithms were publicly discussed and the need for regulation was expressed. At the EU level, this was met when the 'Ethics guidelines for trustworthy AI' were published in April 2019 by the independent high-level expert group set up by the European Commission. The guidelines put forward seven premises or values to be met by AI technologies in order to be trustworthy: (i) human agency and oversight, (ii) technical robustness and safety, (iii) privacy and data governance, (iv) transparency, (v) diversity, non-discrimination and fairness, (vi) societal and environmental well-being and (vii) accountability.22

Table 1 Overview of ethical values of digital health and exemplification of issues involved (adapted from Royakkers et al., 2018 and adjusted based on Daniels, 2008 and the discussion above)5,19

Justice: equity in access, exclusion, equal treatment, non-discrimination, non-stigmatization, data ownership, empowerment
Autonomy: freedom of choice, informed consent, awareness of data collection and use, right to (not) know results
Privacy: data protection, confidentiality, data sharing, intended/unintended uses of data
Security: data storage, safety of information, protection against unauthorized access and use of data
Responsibilities: trust, balance of power, relation between stakeholders (e.g. user–government–provider), benefits and benefit sharing, data ownership
Procedural values: transparency, accountability, inclusiveness
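For illustration, the seven requirements named in the EU guidelines could be turned into a simple self-assessment checklist for a digital health service. The sketch below is our own hypothetical example, not part of the guidelines themselves; the function name and the example assessment are invented.

```python
# The seven trustworthy-AI requirements as named in the EU guidelines;
# the checklist mechanism around them is a hypothetical illustration.
REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def unmet_requirements(assessment: dict) -> list:
    """Return every requirement the assessment does not mark as satisfied."""
    return [r for r in REQUIREMENTS if not assessment.get(r, False)]

# Example: a (fictional) health app that has so far addressed only two requirements.
app_assessment = {"privacy and data governance": True, "transparency": True}
gaps = unmet_requirements(app_assessment)
print(len(gaps))  # 5 requirements still unaddressed
```

Such a checklist makes gaps explicit before deployment, which is in the spirit of the guidelines' call for oversight, though real assessments are of course qualitative rather than boolean.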
With regard to digital health specifically, the WHO concurrently released its above-mentioned 'Recommendations on digital interventions for health system strengthening', which assess the benefits, harms, acceptability, feasibility, resource use and equity considerations of digital health interventions.1 The WHO website states that 'digital health interventions are not a substitute for functioning health systems, and that there are significant limitations to what digital health is able to address';23 in our view, however, this is not balanced enough and slightly too pessimistic. Instead, we hold that digital health interventions and technologies should be seen as a useful addition to non-digital healthcare provision.
In order to support practice, the WHO also implemented the Digital Health Atlas—an online platform to collect, monitor and coordinate digital health initiatives worldwide—and announced that it would establish a section on digital health 'to enhance WHO's role in assessing digital technologies and support Member States in prioritizing, integrating and regulating them'.23
Also, in the related field of regulating health data—often referred to as big data—criteria and proposals have been developed: in 2016, the OECD Recommendation on Health Data Governance set out the need to establish a health data governance framework that encourages the use of personal health data to serve health-related public interest while safeguarding privacy and security.24 What these policy developments have in common is that they reflect the need for guidelines and regulations for dealing with digital health technologies. Whereas the EU General Data Protection Regulation can be seen as a first binding legal step toward protecting data privacy,25 other questions remain unaddressed, for instance clear and universal rules on who owns collected data. Establishing regulations to manage the handling of digital health technologies and big data not only fosters users' trust in digital health, and thus its adoption, but can also contribute to a fair application and use of digital health. As long as digital health can be offered in a fair manner, its opportunities can exceed its challenges.
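One technical safeguard behind such data-protection regulation can be sketched in a few lines. The example below is a hypothetical illustration of ours, not a description of any GDPR-mandated mechanism: direct identifiers are replaced with a keyed hash (pseudonymization) before health data are shared, so records remain linkable for research without exposing identities. The key handling shown is deliberately simplified; real compliance involves far more than this.

```python
import hashlib
import hmac

# Hypothetical key; in practice it would live in a secrets manager, not in code.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always maps to the same pseudonym, so records stay
    linkable across datasets without revealing the identity itself."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "jane.doe@example.org", "blood_pressure": "128/82"}
shared = {"pid": pseudonymize(record["patient_id"]),
          "blood_pressure": record["blood_pressure"]}

# The pseudonym is stable (linkable) but does not contain the original identifier.
assert shared["pid"] == pseudonymize("jane.doe@example.org")
assert "jane.doe" not in shared["pid"]
```

Note that under the GDPR pseudonymized data are still personal data; the sketch reduces exposure but does not anonymize.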
An initiative from outside the European Region that could hold lessons for future action in Europe is the Montréal Declaration for a Responsible Development of Artificial Intelligence, launched in 2017.26 Its strength is that the declaration (published one year later, in 2018) was developed through a public deliberation process involving over 500 citizens, experts and stakeholders from diverse backgrounds. Such an initiative that involves citizens offers transparent policy-making, is in line with Daniels' approach to justice and accountability for reasonableness, and should inform future European initiatives to foster ethical digital health.
In line with the aim of this article, this general discussion highlighted ethical chances and challenges from a justice point of view, which need to be taken into consideration more than is currently the case in order to ultimately design and implement fair policies. Justice and related ethical concepts within digital health are so far underdiscussed in the academic literature and overlooked in practice. Especially from a public health point of view, where the anticipated and real impacts of these innovative technologies on population health and health equity are investigated, ethical analysis to further identify and remedy violations of justice, respect for autonomy and the other key values outlined above needs to be adopted as a necessary and integral aspect of all research.
Conclusion
Digital health technologies offer opportunities to reshape health systems by broadening health coverage and spreading health information and literacy. Moreover, healthcare costs can potentially be reduced and efficiency enhanced. Yet digital health technologies also catalyze challenges with regard to digital illiteracy and the resulting inequities in access and informed consent, which need to be met. Hence, it is crucial for all stakeholders, especially digital health providers, to ensure that digital health interventions are designed and set up in an ethical and fair way, fostering equity in access and fair equality of opportunity for all population groups and taking into account the needs of disadvantaged groups. If awareness of these ethical challenges exists and designers of digital health are held accountable for acting on these considerations when designing and implementing digital health technology, digital health can be an opportunity for everyone. Digital health should improve fair and just access to health prevention and care; if this is guaranteed, digital health has the opportunity of 'only' improving healthcare and public health, as other innovations have in the past. Then it should be regarded as 'just digital health'.
Funding
No external funding has been received.
Conflicts of interest: None declared.
Key points
- Fair and equitable access to digital health technologies and interventions offers chances for healthcare coverage, the spread of health information and literacy, and potentially the efficiency of care.
- The diversity and range of stakeholders in digital health calls for a clear demarcation of each stakeholder's specific responsibilities in assuring ethical and fair digital health.
- Regulations and policies focusing on ethical guidance are needed to foster fair, equitable and trustworthy digital health aiming to empower users.
References
1 World Health Organisation. WHO Guideline: Recommendations on Digital Interventions for Health System Strengthening. Geneva: World Health Organisation, 2019. Available from: https://www.who.int/reproductivehealth/publications/digital-interventions-health-system-strengthening/en/ (22 May 2019, date last accessed).
2 Kickbusch I. Health promotion 4.0. Health Promot Int 2019;34:179–81.
3 Beauchamp D. Public health as social justice. In: Beauchamp DE, Steinbock B, editors.
New Ethics for the Public’s Health. New York: Oxford University Press, 1999: 105–14.
4 O’Neill O. Towards Justice and Virtue. Cambridge: Cambridge University Press, 1996.
5 Daniels N. Just Health: Meeting Health Needs Fairly. Cambridge: Cambridge
University Press, 2008.
6 Daniels N. Accountability for reasonableness. BMJ 2000;321:1300–1.
7 Sreenivasan G. Justice, inequality, and health. In: Zalta EN, editor. The Stanford Encyclopedia of Philosophy [Internet]. Available from: https://plato.stanford.edu/archives/fall2018/entries/justice-inequality-health/ (10 August 2019, date last accessed).
8 Rid A, Biller-Andorno N. Justice in action? Introduction to the mini symposium on
Norman Daniels’ just health: meeting health needs fairly. J Med Ethics 2009;35:1–2.
9 Tayler HA. Incorporating ethics into teaching health policy analysis. In: Strech D,
Hirschberg I, Marckmann G, editors. Ethics in Public Health and Health Policy.
Concepts, Methods, Case Studies. Dordrecht: Springer, 2013: 83–92.
10 Rid A. Just health: meeting health needs fairly. Bull World Health Organ
2008;86:653.
11 Jadad AR, Fandiño M, Lennox R. Intelligent glasses, watches and vests... oh my! Rethinking the meaning of 'harm' in the age of wearable technologies. JMIR mHealth uHealth 2015;3:e6.
12 Kirigia JM, Seddoh A, Gatwiri D, et al. E-health: determinants, opportunities,
challenges and the way forward for countries in the WHO African Region. BMC
Public Health 2005;5:137.
13 Chauvin J, Rispel L. Digital technology, population health, and health equity.
J Public Health Policy 2016;37:145–53.
14 AI Ethics Lab. https://aiethicslab.com/ (30 May 2019, date last accessed).
15 Perakslis ED. Using digital health to enable ethical health research in conflict and
other humanitarian settings. Confl Health 2018;12:1–8.
16 Brall C, Maeckelberghe E, Porz R, et al. Research ethics 2.0: new perspectives on
norms, values, and integrity in genomic research in times of even scarcer resources.
Public Health Genomics 2017;20:27–35.
17 Scassa T. Data ownership. CIGI Papers No. 187; Ottawa Faculty of Law Working Paper No. 2018-26. Sep. 2018.
18 Topol E. A discussion with digital health pioneer Dr. Eric Topol. CMAJ 2013;185:E597.
19 Royakkers L, Timmer J, Kool L, van Est R. Societal and ethical issues of digitization.
Ethics Inf Technol 2018;20:127–42.
20 The Academy of Medical Sciences. Our data-driven future in healthcare. People and partnerships at the heart of health related technologies. Nov. 2018.
21 Vayena E, Haeusermann T, Adjekum A, Blasimme A. Digital health: meeting the
ethical and policy challenges. Swiss Med Wkly 2018;148:w14571.
22 European Commission. Ethics guidelines for trustworthy AI. Report by the High-
Level Expert Group on Artificial Intelligence. Apr. 2019.
23 WHO. WHO guideline: recommendations on digital interventions for health system strengthening. https://www.who.int/reproductivehealth/publications/digital-interventions-health-system-strengthening/en/ (22 May 2019, date last accessed).
24 OECD. Recommendation of the Council on Health Data Governance. 2016. Report
No. OECD/LEGAL/0433.
25 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Available from: https://eur-lex.europa.eu/eli/reg/2016/679/oj (29 May 2019, date last accessed).
26 Montreal Declaration for a Responsible Development of Artificial Intelligence. Université de Montréal, Canada, 2018.
© 2019 European Public Health Association. Copyright of European Journal of Public Health is the property of Oxford University Press / USA and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express written permission. However, users may print, download, or email articles for individual use.