
Blogs & News

Google Engineers ‘Mutate’ AI to Make It Evolve Systems Faster Than We Can Code Them


Much of the work undertaken by artificial intelligence involves a training process known as machine learning, where AI gets better at a task such as recognising a cat or mapping a route the more it does it. Now that same technique is being used to create new AI systems, without any human intervention.

For years, engineers at Google have been working on a freakishly smart machine learning system known as the AutoML system (or automatic machine learning system), which is already capable of creating AI that outperforms anything we’ve made.

Now, researchers have tweaked it to incorporate concepts of Darwinian evolution and shown it can build AI programs that continue to improve upon themselves faster than they would if humans were doing the coding.

The new system is called AutoML-Zero, and although it may sound a little alarming, it could lead to the rapid development of smarter systems – for example, neural networks designed to more accurately mimic the human brain with multiple layers and weightings, something human coders have struggled with.

“It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks,” write the researchers in their pre-print paper. “We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.”

The original AutoML system is intended to make it easier for apps to leverage machine learning, and already includes plenty of automated features itself, but AutoML-Zero takes the required amount of human input way down.

Using a simple three-step process – setup, predict and learn – it can be thought of as machine learning from scratch.

The system starts off with a selection of 100 algorithms made by randomly combining simple mathematical operations. A sophisticated trial-and-error process then identifies the best performers, which are retained – with some tweaks – for another round of trials. In other words, the population of algorithms is mutating as it goes.

When new code is produced, it’s tested on AI tasks – like spotting the difference between a picture of a truck and a picture of a dog – and the best-performing algorithms are then kept for future iteration. Like survival of the fittest.
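
To make the idea concrete, here is a heavily simplified sketch of that kind of evolutionary search, written in Python. The operation set, the toy fitness task and the population numbers are all invented for illustration; the real AutoML-Zero search space and evaluation tasks are far richer.

```python
# Toy evolutionary search: random "programs" built from basic maths
# operations are scored on a simple task, the best survive, and mutated
# copies fill the next generation. Purely illustrative.
import random

OPS = [
    lambda a, b: a + b,
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: max(a, b),
]

def random_program(length=4):
    """A 'program' here is just a fixed-length sequence of binary operations."""
    return [random.choice(OPS) for _ in range(length)]

def run(program, x):
    """Fold the input through the program's operations."""
    acc = x
    for op in program:
        acc = op(acc, x)
    return acc

def fitness(program):
    """Toy task: approximate f(x) = x**2 + x on a few sample points."""
    xs = [0.5, 1.0, 2.0, 3.0]
    return -sum((run(program, x) - (x * x + x)) ** 2 for x in xs)

def mutate(program):
    """Copy the parent and randomly swap one operation."""
    child = list(program)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

population = [random_program() for _ in range(100)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]  # keep the best performers
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

Even at this toy scale, the loop contains the essential ingredients described above: randomly generated programs, selection of the best performers, and mutation to produce the next round of candidates.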

And it’s fast too: the researchers reckon up to 10,000 possible algorithms can be searched through per second per processor (the more computer processors available for the task, the quicker it can work).

Eventually, this should see artificial intelligence systems become more widely used, and easier to access for programmers with no AI expertise. It might even help us eradicate human bias from AI, because humans are barely involved.

Work to improve AutoML-Zero continues, with the hope that it’ll eventually be able to spit out algorithms that mere human programmers would never have thought of. Right now it’s only capable of producing simple AI systems, but the researchers think the complexity can be scaled up rather rapidly.

“While most people were taking baby steps, [the researchers] took a giant leap into the unknown,” computer scientist Risto Miikkulainen from the University of Texas, Austin, who was not involved in the work, told Edd Gent at Science. “This is one of those papers that could launch a lot of future research.”

The research paper has yet to be published in a peer-reviewed journal, but can be viewed online at arXiv.org.

After Covid-19: Anticipate the skills gap with skills mapping


It is close to impossible to predict how the world will change after the Covid-19 pandemic crisis is over and how people will evolve. 


But one thing is certain: the way we work will be deeply transformed (and already has been!). And with transformation come new skills to master and new roles to create.

How cutting edge materials are shaping the future of 3D printing

The latest edition of the AM Focus 2020 Series addresses the cutting edge of additive manufacturing materials today: technical ceramics, continuous fibre-filled composites, refractory metals and high-performance polymers.


3dpbm has announced the fourth edition of the company’s AM Focus eBook series. This latest publication spotlights a far-reaching topic within the additive manufacturing industry: advanced materials.

3dpbm sees this group as creating numerous opportunities across the aerospace, automotive, defence, medical, electronics and dental industries, among others.

Comprising over 50 pages, this eBook provides an analysis of the advanced materials landscape — with market insights from leading industry analyst group SmarTech Analysis — as well as a glimpse into the work of some of the leading companies in ceramics and advanced composites.

Special features zoom in on segment leaders and pioneers such as 3DCERAM SINTO, 9T Labs and Lithoz. These companies, like most that work in advanced materials for AM, are driven by innovation and have a unique vision for the future of additive.

The eBook also addresses many of the most relevant materials and AM hardware companies that are pushing the boundaries of ceramics, composites, refractory metals and advanced polymers.

  • Ceramics companies include 3DCERAM, ExOne, Lithoz, voxeljet and XJet.
  • Composites companies include 9T Labs, Anisoprint, Arevo and Markforged.
  • Refractory metal companies include HC Starck, Heraeus and LPW/Carpenter Technology.
  • High-performance polymer companies include EOS, OPM, Roboze, SABIC, Solvay and Stratasys.

“We at 3dpbm have been eagerly following the progress of advanced materials in additive manufacturing for several years, and we are pleased to put the segment in the spotlight in a new, reader-friendly way,” commented 3dpbm’s editor in chief Tess Boissonneault. “We expect this to be one of our most exhaustive publications on this topic to date and are already looking forward to the next eBook edition, which will focus on sustainability in 3D printing, another topic we—and the AM industry at large—care very much about.”

3dpbm’s eBook can be viewed or downloaded on the 3dpbm website here. To maintain an open discourse and open access to AM news, the publication is free to access.

Source: Aerospace Manufacturing

Medtech firm Enmovi secures £2.5M for wearable tech research 


(L-R) Minister for Trade, Investment and Innovation Ivan McKee and EnMovi CEO Roman Bensen

MedTech firm EnMovi has secured a £2.5m grant from Scottish Enterprise to power new research and development.

The grant will allow the firm to research new wearable technology which will work alongside an app for patients. It is expected the app will provide rehabilitation guides and home exercise plans to patients as well as contact with healthcare professionals.

The Scottish firm was incorporated in October of last year, and is a subsidiary of US-based OrthoSensor and McLaren Applied, a technology offshoot of the McLaren Group, whose businesses include Formula 1 cars.

Its new research and development base at the University of Strathclyde’s Inovo building is expected to create 19 jobs.

Trade, investment and innovation minister Ivan McKee said: “This funding will support EnMovi to capture data and develop wearable technology.

“This will allow for less invasive surgery and faster recovery times for patients.

“This project, which will see a new research and development centre established at the University of Strathclyde’s Inovo building, also brings exciting employment opportunities and will help establish Scotland at the forefront of research into this cutting-edge new technology.”

Source: Business Cloud

 

Ten years on, why are there still so few women in tech?

With the number of women employed in the digital workforce hovering around 17% for the past decade, more needs to be done to diversify the industry


7 Artificial Intelligence Trends to Watch in 2020

 


Artificial Intelligence offers great potential and great risks for humans in the future. While still in its infancy, it is already being employed in some interesting ways. 

Here we explore some of the main AI trends predicted by experts in the field. If they come to pass, 2020 should see some very exciting developments indeed.

What are the next big technologies?


According to sources like Forbes, some of the next “big things” in technology include, but are not limited to: 

  • Blockchain
  • Blockchain As A Service
  • AI-Led Automation
  • Machine Learning
  • Enterprise Content Management
  • AI For The Back Office
  • Quantum Computing AI Applications
  • Mainstreamed IoT

What are some of the most exciting AI trends?

According to sources like The Next Web, some of the main AI trends for 2020 include: 

  • The use of AI to make healthcare more accurate and less costly
  • Greater attention paid to explainability and trust 
  • AI becoming less data-hungry
  • Improved accuracy and efficiency of neural networks
  • Automated AI development
  • Expanded use of AI in manufacturing
  • Geopolitical implications for the uses of AI

What AI trends should you watch in 2020?

Further to the above, here are some more AI trends to look out for in 2020. 

1. Computer Graphics will greatly benefit from AI

One trend to watch in 2020 will be advancements in the use of AI in computer-generated graphics. This is especially true for more photorealistic effects like creating high fidelity environments, vehicles, and characters in films and games. 

Recreating on screen a realistic copy of metal, the dull gloss of wood or the skin of a grape is normally a very time-consuming process. It also tends to need a lot of experience and patience from a human artist.

Various researchers are already developing new methods of helping to make AI do the heavy work involved in creating complex graphics. NVIDIA, for example, has already been working on this for several years.

They are using AI to improve things like ray tracing and rasterization, to create a cheaper and quicker method of rendering hyper-realistic graphics in computer games. 

Researchers in Vienna are also working on methods of partially, or even fully, automating the process under the supervision of an artist. This involves the use of neural networks and machine learning to take prompts from a creator to generate sample images for their approval. 


2. Deepfakes will only get better, er, worse

Deepfake is another area that has seen massive advancement in recent years. 2019 saw a plethora of, thankfully humorous, deepfakes that went viral on many social media networks. 

But this technology will only get more sophisticated as time goes by. This opens the door for some very worrying repercussions which could potentially damage or destroy reputations in the real world. 

With deepfakes already becoming very hard to distinguish from real video, how will we be able to tell if anything is fake or not in the future? This is very important, as deepfakes could readily be used to spread political disinformation, corporate sabotage, or even cyberbullying. 

Google and Facebook have been attempting to get out ahead of the potential negative aspects by releasing thousands of deepfake videos to teach AIs how to detect them. Unfortunately, it seems even AI has been stumped at times.

3. Predictive text should get better and better

Predictive text has been around for some time now, but by combining it with AI we may reach a point where the AI knows what you want to write before you do. “Smart” email predictive text is already being tested on programs like Gmail, for example. 

If used correctly, this could help users speed up their writing significantly, and could be especially useful for those with physical conditions that make typing difficult. Of course, many people will find themselves typing out the full sentence anyway, even if the AI correctly predicted their intentions. 

How this will develop in 2020 is anyone’s guess, but it seems predictive text may become an ever-increasing part of our lives. 
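
Under the hood, even the simplest form of predictive text boils down to estimating which word is most likely to come next. The toy sketch below uses nothing more than bigram counts; commercial systems such as Gmail’s feature rely on much larger neural language models, so treat this purely as an illustration of the idea.

```python
# A toy next-word suggester based on bigram counts.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def suggest_next(counts: dict, word: str, k: int = 3) -> list:
    """Return the k most frequently observed continuations of `word`."""
    return [w for w, _ in counts[word.lower()].most_common(k)]

if __name__ == "__main__":
    corpus = (
        "thanks for your email I will get back to you soon "
        "thanks for your help I will get the report to you today"
    )
    model = train_bigrams(corpus)
    print(suggest_next(model, "will"))   # e.g. ['get']
    print(suggest_next(model, "your"))   # e.g. ['email', 'help']
```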

4. Ethics will become more important as time goes by

As AI becomes ever-more sophisticated, developers will be under more pressure to keep an eye on the ethics of their work. An ethical framework for the development and use of AI could define how the human designers of AI should develop and use their creations, as well as what AI should and should not be used for. 

It could also eventually define how AI itself should behave, morally and ethically. Called “Roboethics” for short, the main concern is preventing humans from using AI for harmful purposes. Eventually, it may also include preventing robots and AI from doing harm to human beings. 

Early references to Roboethics include the work of author Isaac Asimov and his “Three Laws of Robotics”. Some argue that it may be time to encode many of Asimov’s concepts into law before any truly advanced AIs are developed. 

5. Quantum computing will supercharge AI

Another trend to watch in 2020 will be advancements in quantum computing and AI. Quantum computing promises to revolutionize many aspects of computer science and could be used to supercharge AI in the future.

Quantum computing holds out the hope of dramatically improving the speed and efficiency of how we generate, store, and analyze enormous amounts of data. This could have enormous potential for big data, machine learning, AI, and privacy. 

By massively increasing the speed of sifting through and making sense of huge data sets, AI and humanity could benefit greatly. It could also make it possible to break much of today’s public-key encryption, potentially making privacy a thing of the past. The end of privacy or a new Industrial Revolution? Only time will tell.

6. Facial recognition will appear in more places

Facial recognition appears to be en vogue at the moment. It is popping up in many aspects of our lives and is being adopted by both private and public organizations for various purposes, including surveillance. 

Artificial Intelligence is increasingly being employed to help recognize individuals and track their locations and movements. Some programs in development can even identify individual people by analyzing their gait and heartbeat.

AI-powered surveillance is already in place in many airports across the world and is increasingly being employed by law enforcement. This is a trend that is not going away anytime soon. 

7. AI will help in the optimization of production pipelines

The droid manufacturing facility in Star Wars Episode II: Attack of the Clones might not be all that far, far away. Fully autonomous production lines powered by AI are set to be with us in the not-too-distant future.

While we are not quite there yet, AI and machine learning are being used to optimize production as we speak. This promises to reduce costs, improve quality, and reduce energy consumption for those organizations who are investing in it. 

Source: Interesting Engineering

By Christopher McFadden

Importance of Soft Skills for Software Architects

Very often software architects get a reputation for being crack programmers and for building solid architectures, but they have problems with project management or relationships with clients.

 Read More

Soft skills are very important nowadays. Whereas hard skills can be learned and perfected over time, soft skills are more difficult to acquire and change. Actually, I would say that the importance of soft skills in your job search and overall career is greater than you think because they help facilitate human connections. Soft skills are key to building relationships, gaining visibility, and creating more opportunities for advancement. And this article is about the importance of soft skills for software architects.

The importance of soft skills. What are soft skills, and why do you need them?

You shouldn’t underestimate the importance of soft skills. Basically, you can be the best at what you do, but if your soft skills aren’t good, you’re limiting your chances of career success.

Soft skills are the personal attributes you need to succeed in the workplace. In other words, soft skills are a combination of social skills, communication skills, flexibility, conflict resolution and problem-solving skills, critical thinking skills, and emotional intelligence, among others, that enable people to effectively navigate their environment, work well with others, perform well, and achieve their goals, complementing their hard skills.

Soft skills are the difference between adequate candidates and ideal candidates. In most competitive job markets, recruitment criteria do not stop at technical ability and specialist knowledge. Employers look for a balance of hard and soft skills when they make hiring decisions. For example, employers value skilled workers with a track record of getting the job done on time. Employers also value workers with strong communication skills and a strong understanding of company products and services.

Even though you may have exhaustive knowledge of your area, you will find it difficult to work with people and retain projects if you lack interpersonal and negotiation skills. And soft skills are not just important when facing external customers and clients; they are equally important when it comes to interacting with colleagues. Soft skills relate to how you work with others. Employers value soft skills because they enable people to function and thrive in teams and in organisations as a whole. A productive and healthy work environment depends on soft skills. After all, the workplace is an interpersonal space, where relationships must be built and fostered, perspectives must be exchanged, and occasionally conflicts must be resolved.

Essential soft skills for being a good software architect

At Apiumhub, we believe that the most successful architects we have met possess more than just great technical skills. They also have qualities that enable them to work well with people.

There are a lot of brilliant technologists who can solve just about any technical problem but are so arrogant that people despise working with them. For example, if you look at the Microsoft Architect program, you will notice that there is a set of competencies that go well beyond technical skills. These competencies were based on focus groups from companies large and small. One common theme from these focus groups was that the importance of soft skills is huge. In fact, they identified more soft competencies than technical competencies. In their view, the soft skills are what separate the highly skilled technologist from the true software architect.

The International Association of Software Architects (IASA) has also gone through a detailed analysis and polled its members to determine the skills necessary to be a successful software architect; here too, the importance of soft skills was highlighted.

At Apiumhub, we also believe that the most successful architects we know are able to increase their effectiveness by combining their technical and nontechnical skills. A successful technical solution requires three distinct soft skills: business alignment, perspective awareness, and communication.

Most software projects begin with some type of requirements document that drives most of the technical decisions, or at least an architecture document that demonstrates how the architecture meets business needs. The issue is generally alignment at the strategic level. Usually, the software architect can discuss the business requirements, but it is surprising how often the architect cannot explain the project in terms that the CFO would understand. There is a lack of understanding of the real business drivers and the detailed financial implications versus the business requirements, yet these drivers are the critical factor behind real project decisions. Being a software architect implies thinking about your projects like a CEO and CFO. Invest the time up front to dissect the business drivers for the project, and if possible, determine the true financial impact of the costs and benefits of the project.

You need to think as an architect and not always accept clients’ demands as sometimes it is just impossible to do what you are asked to achieve. Use business drivers instead of requirements as your guide for developing the solution architecture. You need to keep an eye on business throughout the project lifecycle to maintain the appropriate flexibility in the project.

Also, you should constantly evaluate how your methodology maintains business alignment during the project life cycle. In other words, a software architect should think about scalability, performance and cost reduction.

So, let’s look at the most demanded soft skills for software architects

Leadership

You need to be an example for your team, be a person they would like to be. It is also about defining and communicating vision and ideas that inspire others to follow with commitment and dedication. You need to provide direction, and to lead, you need to know where you are going and make the decisions that will get you there. Understanding people is key here as you need to know how to explain your decisions.

Communication

In our opinion, communication is the most important soft skill, whether oral or written. It means being able to actively listen to others and to explain your ideas, in writing and verbally, to an audience in a way that achieves the goals you intended with that communication. Communication skills are critical both within internal teams and when dealing with clients. And communication is also an important aspect of leadership, since leaders must be able to delegate clearly and comprehensively.

Systems thinking

Understand decisions and constraints in the wider scope. Systems thinking involves the techniques and thinking processes essential to setting and achieving the business’s short-term and long-term priorities and goals. And your decisions should be aligned with the overall business of the company.

Flexibility

It is about adaptability, about willing to change, about lifelong learning, about accepting new things. Really, don’t underestimate the ability to adapt to changes. In today’s rapidly evolving business environment, the ability to pick up on new technologies and adjust to changing business surroundings is critically important. Flexibility is an important soft skill, as it demonstrates an ability and willingness to acquire new hard skills and an open-mindedness to new tasks and new challenges.

Interpersonal skills

It is all about cooperation, about getting along with others, being supportive, helpful, collaborative. You should be effective at building trust, finding common ground, having emotional empathy, and ultimately building good relationships with people at work and in your network. People want to work with people they like, or think they’ll like – people who are easygoing, optimistic, and even fun to be around regardless of the situation. Because at the end of the day if you can’t connect with someone, then you will never be able to sell your idea – no matter how big or small it may be.

Positive attitude

A positive attitude means being optimistic, enthusiastic, encouraging, happy, and confident.

This soft skill can be improved by offering suggestions instead of mere criticism, being more aware of opportunities and complaining less. Experience shows that those who have a positive attitude usually have colleagues that are more willing to follow them. People will forget what you did, but they will never forget how you made them feel.

Responsibility

You should be accountable, reliable and self-disciplined; you should get the job done and want to do it well. Don’t forget that you will be an example for others.

Sharing knowledge skills

Working in a team means helping each other and sharing knowledge. Companies don’t want a brilliant software architect who is never ready to share their knowledge with others. By sharing knowledge, you grow your team of high-quality tech experts.

Critical Thinking

The ability to use reasoning, past experience, research, and available resources to fundamentally understand and then resolve issues. For example, Bill Gates reads 50 books each year, most of them nonfiction and selected to help him learn more about the world. Critical thinking involves assessing facts before reaching a conclusion. Software architects are sometimes faced with a handful of possible solutions, and only critical thinking will allow them to quickly test each scenario mentally before choosing the most efficient one.

Organization

Planning and effectively implementing projects and general work tasks for yourself and others is a highly effective soft skill to have.

Proactivity

Employers are looking for employees who take initiative and are reliable. Sometimes CEOs don’t have the time to think about tech issues, so a software architect should take the initiative and cover the technical side of the business.

Problem solving

Employers want professionals who know how and when to solve issues on their own, and when to ask for help. Problem-solving does not just require analytical, creative and critical skills, but a particular mindset: those who can approach a problem with a cool and level head will often reach a solution more efficiently than those who cannot. This is a soft skill which can often rely on strong teamwork too. Problems need not always be solved alone. The ability to know who can help you reach a solution, and how they can do it, can be a great advantage. It is also about being able to coordinate and solicit opinions and feedback from a group with diverse perspectives to reach a common, best solution.

Time management

Time management is more than just working hard. It means making the most of each day and getting the most important things done first – it is about priorities. The ability to delegate assignments to others when needed is also part of it. Many jobs come with demanding deadlines and occasionally high stakes. Recruiters and clients prize candidates who show a decisive attitude, an unfaltering ability to think clearly, and a capacity to compartmentalize and set stress aside.

Never stop learning

Learning is a never-ending process. There is always someone you can learn from and some abilities you can improve or adjust. What matters is your willingness to learn.

There is a very good article about soft skills written by WikiJob, which actually inspired us to write an article about soft skills specifically for software architects. In conclusion, I want to say that every software architect should understand the importance of soft skills. A software architect should find a balance between hard skills and soft skills to be truly good at what they are doing and how they are doing it.

 By EKATERINA NOVOSELTSEVA

Source: APIUMHUB

Total Quality and the Meaning of Data Integrity


Due to their direct impact on the health and even lives of patients, quality control and quality assurance are of paramount importance in the Life Sciences industry. All stakeholders – government agencies, manufacturers, distributors and healthcare professionals – therefore take the issue of quality very seriously. We asked Alban van Landeghem, Sr. Life Sciences Business Consultant at Dassault Systèmes, five questions about Total Quality, a concept of high priority for Life Sciences solutions at Dassault Systèmes.

What is Total Quality?

“Total Quality[1] is a systemic view over the quality of the product and all processes related to the entire development and production process,” says Van Landeghem.  “In the Life Sciences industry, everyone considers the end-user of a drug or device:  the patient. In order to provide the patient with the best possible supportive care, each step in the therapeutic  solution lifecycle from development, manufacturing, distribution and management of bulk, intermediate or even final materials needs to be considered.”  Total Quality is the integrated framework that inserts controls and best practices at each step of the product lifecycle.

This total quality objective has one goal: provide the best efficacy and safety for the manufacturing of medical solutions, such as described in the International Council for Harmonisation[2] standards.

While Total Quality concepts were first developed in the automotive industry[3] for understandable reasons, pharmaceutical quality is of equal if not greater importance. “Users choose to drive a car knowing the risks; patients suffer their illness without being able to choose their drugs.” Therefore, a systemic quality approach guarantees a level of confidence in the appropriate efficacy and safety as accepted by the market authorization.

What does Total Quality consist of?

Total Quality consists of the control of all elements, processes and behaviours that lead to the manufacturing of the product as designed and as registered. “For years, the Life Sciences industry has been very committed to delivering best-in-class products to patients. To do so, standards and guidelines are continuously improved. For example, drug manufacturing guidelines such as Good Manufacturing Practices[4][5] (GMPs) advise companies on how to manage their quality systems, but also how to maintain their human resources knowledge and know-how.”

Manufacturers need to consider the importance of having the right people in the correct role, in the correct location, with the required knowledge and training.  GMPs also address buildings and facilities configurations: control of energy used in production, sanitation, calibration and validation of equipment.

On top of this come the required records and documentation. Any process needs to be described in standard operating procedures (SOPs) and Work Instructions (WIs) in order to convey only best-in-class practices. Any product (bulk, intermediary, final) being released needs to be checked through quality control processes. Whereas quality control was typically performed at the end of operations, newer approaches based on Quality by Design can involve Process Analytical Technology tools, allowing control to take place earlier in the process.

Once established, these quality control activities need to be audited and tracked to ensure products suit the established specifications in the marketing authorisation file.

The reason “Total Quality” is so comprehensive in pharmaceutical development is that drugs should be manufactured as designed and as registered. “And it is important to note that Total Quality extends to the whole lifecycle of the drug,” Van Landeghem adds.

What is needed for Total Quality?

“To streamline Total Quality, Life Sciences companies need, above all, a complete and 360° vision of their enterprise operations. Dassault Systèmes delivers management of quality processes through the 3DEXPERIENCE platform, which helps users answer the specific needs of several industries – Life Sciences included”.

“The 3DEXPERIENCE Platform provides roles for collaboration, design, simulation and real-world analysis that will help, when properly deployed and installed, the final decision-maker to envision, based on digital experiences, the best decisions to take. The 3DEXPERIENCE Compass will indeed guide the decision-maker in its quality journey.”

What is the role of Data Integrity in Total Quality?

Data integrity[6] is not a new concept especially not in Life Sciences. Data integrity was as important for paper-based records as it is for electronic records. Data integrity refers to the completeness, consistency, and accuracy of data.

It is a minimal requirement to guarantee trust among regulatory bodies, the Life Sciences Industry ecosystem and not least patient communities.

These data integrity requirements are applicable to each step of the development of a drug or therapeutic device, from research, lab development and experimentation, manufacturing, distribution, and finally, administration.

Data Integrity processes are a prerequisite for assuring the end-user that drugs or devices are manufactured by the registered processes and without corruption of data, delivering the exact benefits expected and approved by regulatory bodies.

Data Integrity plays a very important role in gaining the trust of regulatory bodies. “For one, involved IT solutions need to be able to guarantee there has not been any information breach and that all data has been inputted by authorised staff.” Data also needs to follow ALCOA guidelines: attributable, legible, contemporaneous, original and accurate. “So you need to know who recorded the data, it needs to be readable, recorded at the time the work is performed, and in the right protocol from the primary data source, and of course complete and free from error,” Van Landeghem summarises.

How do you guarantee Data Integrity?

Industry standards like ALCOA guidelines have consequences for the use of technology in quality control. “The platform needs to be controlled, verified, validated and secure,” Van Landeghem says. “That means strong password protection. It should also take into account the stage of development provided by R&D departments.”

By Dassault Systems Blogs3ds

Source: Alban Van Landeghem

A national strategy on the sharing of NHS data with industry

The UK Government has announced a new framework on how the NHS shares data with researchers and innovators, and a new National Centre of Expertise to provide specialist advice and guidance to the NHS on agreements for use of data.


The Department of Health and Social Care’s guidance published on Monday provides a framework that aims to help the NHS realise benefits for patients and the public where the NHS shares data with researchers.

NHSX to host Centre of Expertise

The Centre will:

  • Provide commercial and legal expertise to NHS organisations – for potential agreements involving one or many NHS organisations, such as cross-trust data agreements or those involving national datasets.
  • Provide good practice guidance and examples of standard contracts and methods for assessing the “value of different partnership models to the NHS”.
  • Signpost NHS organisations to relevant expert sources of guidance and support on matters of ethics and public engagement, both within the NHS and beyond.
  • Build relationships and credibility with the research community, regulators, NHS and patient organisations, including developing “insight into demand” for different datasets and “opportunities for agreements that support data-driven research and innovation”.
  • Develop benchmarks for NHS organisations on “what ‘good’ looks like in agreements involving their data”, and setting standards on transparency and reporting.

The full policy framework document underpinning the Centre’s functions will be published later this year, with plans to recruit for the Head of the Centre over the coming months to enable it to commence work this year.

Five principles to support the use of data-driven innovations in the NHS

  • Any use of NHS data must have an “explicit aim” to improve the health, welfare and/or care of patients in the NHS, or the operation of the NHS, such as the discovery of new treatments, diagnostics and other scientific breakthroughs. And where possible, the terms of any arrangements should include quantifiable and explicit benefits for patients which will be realised as part of the arrangement.
  • NHS organisations entering into arrangements involving their data, individually or as a consortium, should ensure they “agree fair terms for their organisation and for the NHS as a whole”. Boards of NHS organisations “should consider themselves ultimately responsible for ensuring that any arrangements entered into by their organisation are fair”.
  • NHS organisations “should not enter into exclusive arrangements for raw data held by the NHS, nor include conditions limiting any benefits from being applied at a national level, nor undermine the wider NHS digital architecture, including the free flow of data within health and care, open standards and interoperability”.
  • Any arrangements agreed by NHS organisations should be transparent to support public trust and confidence in the NHS.
  • Any arrangements agreed by NHS organisations should fully adhere to all applicable national-level legal, regulatory, privacy and security obligations, including the National Data Guardian’s Data Security Standards, the General Data Protection Regulation and the Common Law Duty of Confidentiality.

Data agreements going forward

This latest iteration of the principles should be factored into decisions taken by the NHS and partners when entering into data agreements.

But NHS organisations are reminded that agreements should not be entered into which “grant one organisation sole (exclusive) right of access to or use of raw NHS data, either patient or operational data”.

The principles are intended to cover two types of agreements, those:

  • involving data entered into by all NHS organisations, at the primary (GPs), secondary and tertiary care levels, including relevant data from organisations contracted and funded to deliver NHS services; and
  • involving a commercial partner or where the outputs could be commercialised, regardless of the type of organisation the NHS is partnering with.

The department plans to publish another iteration of the principles in a new policy framework later this year.

By Stuart Knowles

Source: Mills & Reeve

A new dawn in engineering and innovation as UCLan’s new centre is launched

A new dawn in engineering and innovation has been unveiled today as UCLan’s Engineering Innovation Centre is officially launched.


Looming large on the Preston skyline, the £35m teaching and research facility engages directly with industry and provides students with real-world experience on live, engineering-related projects.

The aim of the Engineering Innovation Centre (EIC) is to provide courses which respond to industry demand to improve productivity across the North West.

The strategy aims to support the innovation needs of 1,300 regional small and medium enterprises now and in the future.

The EIC will act as one of the driving forces behind the Lancashire Industrial Strategy as well as national industrial strategy, addressing the need for innovation and producing the next generation of world-class engineers.

Research and teaching facilities include a 3D printing lab, an advanced manufacturing workshop, an intelligent systems facility, a motorsports and air vehicles lab, a high-performance computing lab, a flight simulator suite as well as a fire, oil and gas facility.


To date, the EIC is the largest single investment in Lancashire’s educational infrastructure establishing UCLan as one of the UK’s leading universities for engineering innovation.

Identified as a signature project within Lancashire’s Strategic Economic Plan, the EIC secured £10.5 million worth of funding via the Lancashire Enterprise Partnership’s Growth Deal with the Government.


Professor Graham Baldwin, vice-chancellor at UCLan, said: “The provision of practice-based learning has always been a strength of this University and now, through the EIC and our links with industry, we will ensure our students gain exposure to even greater levels of applied, real-world learning.

“Our strategy is to ensure the University is at the forefront of future skills development enabling Lancashire and the North West region to lead the new ‘digital’ industrial revolution which is now upon us.”

The new facility has also received £5.8 million from the European Regional Development Fund (ERDF) and £5 million from HEFCE’s STEM Capital Fund.

The EIC forms part of the University’s £200 million Masterplan, which also includes a new student support centre, improvements to the public realm and highways around the Adelphi roundabout, new social space facilities and a new multi-faith centre, all at the Preston campus.

Working in partnership, SimpsonHaugh and Reiach and Hall Architects designed the EIC, which was built by main contractor BAM Construction.

David Taylor, pro-chancellor and chair of the University Board, added: “The EIC is not only a significant asset to the University but also the county, wider region and the UK.

“It will act as one of the driving forces behind the industrial strategy both on a regional and national scale while cementing Lancashire’s position as a national centre of excellence for aerospace, advanced engineering and manufacturing.”

Minister for the Northern Powerhouse and Local Growth, Rt Hon Jake Berry MP, said: “We are committed to boosting economic growth across the Northern Powerhouse and levelling up every place in the UK as we prepare to leave the EU on 31 October.

“Thanks to £10.5 million of investment from the Government’s Local Growth Fund, the University of Central Lancashire’s flagship Engineering Innovation Centre will play an important role in cementing the North’s long-standing reputation for world-class further education, scientific innovation and engineering excellence.

“The advances made and skills learned at this pioneering facility will have far-reaching benefits from equipping young people for well paid, highly skilled jobs to technological advances supporting manufacturing businesses throughout the North and around the world.”

Steve Fogg, Chair of the Lancashire Enterprise Partnership, added: “The LEP has invested £10.5m Growth Deal funding towards creating this world-class centre of excellence for high technology manufacturing which will support innovation in local businesses and supply the skilled and talented engineers they need to grow and succeed.

“Lancashire is already the country’s number one region for aerospace production and advanced manufacturing.

“By funding projects like the EIC, the development of the Advanced Manufacturing Research Centre at the Samlesbury Aerospace Enterprise Zone and new education and training facilities across the county, the LEP is investing in the facilities and the skilled workforce of the future needed for the sector to maintain and build on its leading position, compete on the global stage and take advantage of opportunities in emerging markets.

“Our £320m investment programme is supporting strategically important projects like this all across Lancashire which, together, will drive substantial economic growth for years to come, create thousands of new jobs and homes and attract £1.2bn in private investment.”

By Rachel Smith

Source: Preston City Centre, UCLan Blog Preston

 

No-deal Brexit planning for life sciences businesses – new guidance and scenario planning

Brexit uncertainty remains a fact of life for business. Next week’s Parliamentary vote on the Withdrawal Agreement is unlikely to resolve matters. For the time being, planning for no-deal is set to stay with us as a time-consuming and costly distraction from other priorities.


Clearly the regulatory issues for life sciences businesses are substantial, with most regulation based on EU law and much of it implemented through EU institutions. New guidance from Government addresses a number of areas and helps to put some of the issues in context.

The medicines regulator, the MHRA, brings together relevant Government guidance and communications with industry on its website – Making a success of Brexit. The latest addition to this collection is a further guidance note, produced in response to the consultation on draft legislation and giving more detail on the arrangements in the event of no deal. We highlight below a few points from that guidance:

Medicines Regulation

The MHRA will take on regulation for the UK market. The guidance puts forward a package of measures largely replicating the European system and offering some attractive features to maintain the UK’s competitiveness as a research and development location. The proposals include:

  • Grandfathering of Centrally Authorised Products:[1] Transitional legislation will ensure that Centrally Authorised Products will benefit from an automatic UK marketing authorisation for a limited period. Marketing authorisation holders can opt-out of this “grandfathering” process. If they wish to retain the UK MA, MAHs will have to provide baseline data for grandfathered UK MAs by 29 March 2020. Processing of variations will require at least basic baseline data to have been submitted.
  • MA assessment routes: new assessment procedures for products containing new active substances and biosimilars are planned. These will include a 67-day review for products benefiting from a positive EU CHMP opinion, a full accelerated assessment for new active substances taking no more than 150 days and a “rolling review” process for new active substances and biosimilars still in development.
  • Abridged applications would need to reference UK authorised products. However, this would include Centrally Authorised Products that had been converted to UK MAs and also unconverted Centrally Authorised Products granted before Brexit.
  • Incentives for orphan medicines will be offered, including fee refunds and waivers, and a 10-year exclusivity period. The EU’s pre-marketing orphan designation will not be replicated, as a separate UK designation is not seen as providing a substantial additional incentive for developers.
  • Data exclusivity and incentives for paediatric investigation plans will largely replicate the current EU legislation, at least initially.
  • New UK-specific legal presence requirements will be introduced for holders of marketing authorisations and Qualified Persons (QPs).[2]
  • Arrangements for recognition of QP certification from EU countries are planned. Wholesalers will need to familiarise themselves with the details of this system as there are specific requirements designed to ensure public safety.
  • Some elements of the Falsified Medicines regime will fall away, as the UK is unlikely to have access to the central EU data hub recording dealings with individual packs of medicines. The future of this regime in the UK will be evaluated.
  • The UK plans to permit ongoing parallel importation of medicines authorised elsewhere in the EU where the MHRA can satisfy itself that the imports are essentially similar to a UK-authorised product. Parallel import licence holders will have to comply with new requirements such as establishing a UK base.

Medical devices

Plans for the future regulation of medical devices are less well developed. Further consultation will be carried out before changes are made. Importantly, the UK intends to track the implementation of the new EU laws on medical devices and in vitro diagnostic medical devices, due to apply from May 2020 and May 2022 respectively.

The guidance recognises that UK Notified Bodies will lose their status under EU legislation in the event of no deal. Products they have certified will no longer be validly marketed in the EU.

The UK will take steps to minimise short term disruption by continuing to allow marketing of devices in conformity with the EU legislation and also those certified by UK Notified Bodies. It will continue to recognise existing clinical investigation approvals and will not require labelling changes. New medical devices will need to be registered with the MHRA, although grace periods of up to 12 months after Brexit day are provided to give manufacturers time to comply.

Clinical trials

Much of the clinical trials system operates nationally and can continue, and the UK will continue to recognise all existing approvals. For new trials, a sponsor or legal representative could be based in the UK or in a country on an approved list – initially including all EU and EEA countries.

The UK would no longer have access to the European regulatory network for clinical trials, and pan-EU trials will presumably require an EU-based sponsor or legal representative.

The UK intends to align with the new EU Clinical Trials Regulation to the extent that it can. This is unlikely to include access to the EU clinical trials portal, but a new UK clinical trials hub will be introduced to provide a similar central information resource for UK trials.

Scenario planning for your business

These regulatory changes form just part of the picture for life science businesses, many of whom must also contend with a range of other issues. A number of our clients are already planning for different scenarios. Our experience indicates that the issues needing consideration fall into the following categories:

  • Regulatory
  • IP
  • Contracts
  • People
  • Establishments & Structures

[1] Centrally Authorised Products are those which have been through the European Medicines Agency’s approval process resulting in a single approval for the whole EU.

[2] A Qualified Person is an experienced professional responsible for certifying that medicines comply with applicable legal requirements. 

By Isabel Teare, Senior Legal Adviser

Source: Mills & Reeve

Our Behaviour in This Pandemic Has Seriously Confused AI Machine Learning Systems


The chaos and uncertainty surrounding the coronavirus pandemic have claimed an unlikely victim: the machine learning systems that are programmed to make sense of our online behaviour.

The algorithms that recommend products on Amazon, for instance, are struggling to interpret our new lifestyles, MIT Technology Review reports.

And while machine learning tools are built to take in new data, they’re typically not so robust that they can adapt as dramatically as needed.

For instance, MIT Tech reports that a company that detects credit card fraud needed to step in and tweak its algorithm to account for a surge of interest in gardening equipment and power tools.

An online retailer found that its AI was ordering stock that no longer matched with what was selling. And a firm that uses AI to recommend investments based on sentiment analysis of news stories was confused by the generally negative tone throughout the media.
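
What these systems are running into is a shift in the data distribution: production inputs no longer resemble the data the models were trained on. The sketch below shows one crude way a team might flag such a shift; the numbers and the three-sigma threshold are invented for illustration, not a production monitoring recipe.

```python
# A minimal drift check: compare the recent mean of a model input against
# the window the model was trained on, measured in training standard deviations.
import statistics

def drift_score(training_window, recent_window):
    """How many training standard deviations the recent mean has moved."""
    mu = statistics.mean(training_window)
    sigma = statistics.stdev(training_window) or 1.0
    return abs(statistics.mean(recent_window) - mu) / sigma

# Hypothetical daily sales of "gym equipment" before and during lockdown.
pre_pandemic = [102, 98, 110, 95, 101, 99, 104, 97]
lockdown = [240, 260, 255, 270]

if drift_score(pre_pandemic, lockdown) > 3:
    print("Input distribution has shifted - retrain or manually review the model.")
```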

“The situation is so volatile,” Rael Cline, CEO of the algorithmic marketing consulting firm Nozzle, told MIT Tech.

“You’re trying to optimize for toilet paper last week, and this week everyone wants to buy puzzles or gym equipment.”

While some companies are dedicating more time and resources to manually steering their algorithms, others see this as an opportunity to improve.

“A pandemic like this is a perfect trigger to build better machine-learning models,” Sharma said.

Source: The Guardian

US and UK ‘lead push against global patent pool for Covid-19 drugs’

A researcher in Beijing works on an experimental coronavirus vaccine


Ministers and officials from every nation will meet via video link on Monday for the annual World Health Assembly, which is expected to be dominated by efforts to stop rich countries monopolising drugs and future vaccines against Covid-19.

As some countries buy up drugs thought to be useful against the coronavirus, causing global shortages, and the Trump administration does deals with vaccine companies to supply America first, there is dismay among public health experts and campaigners who believe it is vital to pull together to end the pandemic.

While the US and China face off, the EU has taken the lead. The leaders of Italy, France, Germany and Norway, together with the European commission and council, called earlier this month for any innovative tools, therapeutics or vaccines to be shared equally and fairly.

“If we can develop a vaccine that is produced by the world, for the whole world, this will be a unique global public good of the 21st century,” they said in a statement.

The sole resolution before the assembly this year is an EU proposal for a voluntary patent pool. Drug and vaccine companies would then be under pressure to give up the monopoly that patents allow them on their inventions – a monopoly that lets them charge high prices – so that all countries can make or buy affordable versions.

In the weeks of negotiations leading up to the meeting, which is scheduled to last for less than a day, there has been a dispute over the language of the resolution. Countries with major pharmaceutical companies argue they need patents to guarantee sufficiently high prices in wealthy nations to recoup their research and development costs.

Even more fraught have been attempts to reinforce countries’ existing rights to break drug and vaccine company patent monopolies if they need to for the sake of public health. A hard-fought battle over Aids drugs 20 years ago led to the World Trade Organization’s Doha declaration on trade-related intellectual property (Trips) in favour of access to medicines for all, but the US, which has some of the world’s biggest drug companies, has strongly opposed wording that would encourage the use of Trips.

Source: The Guardian

Three firmware blind spots impacting security

Built into virtually every hardware device, the firmware is lower-level software that is programmed to ensure that hardware functions properly.


As software security has been significantly hardened over the past two decades, hackers have responded by moving down the stack to focus on firmware entry points. Firmware offers a target that basic security controls can’t access or scan as easily as software, while allowing attackers to persist and continue leveraging many of their tried-and-true attack techniques.

The industry has reacted to this shift in attackers’ focus by making advancements in firmware security solutions and best practices over the past decade. That said, many organizations are still suffering from firmware security blind spots that prevent them from adequately protecting systems and data.

This can be caused by a variety of factors, from simple platform misconfigurations or reluctance about installing new updates to a general lack of awareness about the imperative need for firmware security.

In short, many don’t know what firmware security hazards exist today. To help readers stay more informed, here are three firmware security blind spots every organization should consider addressing to improve its overall security stance:

1. Firmware security awareness

The security of firmware running on the devices we use every day has been a novel focus point for researchers across the security community. With multiple components running a variety of different firmware, it might be overwhelming to know where to start. A good first step is recognizing firmware as an asset in your organization’s threat model and establishing the security objectives towards confidentiality, integrity, and availability (CIA). Here are some examples of how CIA applies to firmware security:

  • Confidentiality: There may be secrets in firmware that require protection. The BIOS password, for instance, might grant attackers authentication bypass if they were able to access firmware contents.
  • Integrity: This means ensuring the firmware running on a system is the firmware intended to be running and hasn’t been corrupted or modified. Features such as secure boot and hardware roots of trust support the measurement and verification of the firmware you’re running.
  • Availability: In most cases, ensuring devices have access to their firmware in order to operate normally is the top priority for an organization as far as firmware is concerned. A potential breach of this security objective would come in the form of a permanent denial of service (PDoS) attack, which would require manual re-flashing of system components (a sometimes costly and cumbersome solution).

The first step toward firmware security is awareness of its importance as an asset to an organization’s threat model, along with the definition of CIA objectives.
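
As a concrete illustration of the integrity objective, the snippet below compares the hash of a dumped firmware image against a known-good measurement. The file name and reference hash are placeholders; in practice the reference value would come from the vendor or from your own golden image.

```python
# Compare the hash of a firmware image dump against an expected measurement.
import hashlib

KNOWN_GOOD_SHA256 = "0" * 64  # placeholder: substitute the vendor-published hash

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

image_hash = sha256_of("firmware_dump.bin")  # e.g. produced by a SPI flash reader
if image_hash != KNOWN_GOOD_SHA256:
    print("Firmware image does not match the expected measurement:", image_hash)
```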

2. Firmware updates

The increase in low-level security research has led to an equivalent increase in findings and fixes provided by vendors, contributing to the gradual improvement of platform resilience. Vendors often work with researchers, through their bug bounty programs, their in-house research teams, and researchers presenting their work at conferences around the world, to coordinate the disclosure of firmware security vulnerabilities. The industry has come a long way in enabling collaboration, establishing processes and accelerating response times towards a common goal: improving the overall health and resilience of computer systems.

The firmware update process can be complex and time consuming, and it involves a variety of parties: researchers, device manufacturers, OEMs, and so on. For example, once UEFI’s EDK II source code has been updated with a new fix, vendors must adopt it and push the changes out to end customers. Vendors issue firmware updates for a variety of reasons, but some of the most important patches are designed explicitly to address newly discovered security vulnerabilities.

Regular firmware updates are vital to a strong security posture, but many organizations are hesitant to introduce new patches. Whether it’s concern over the time or cost involved, or fear of bricking platforms, there are a variety of reasons why updates are left uninstalled. Delaying or forgoing available fixes, however, increases the amount of time your organization may be at risk.

A good example of this is WannaCry. Although Microsoft had previously released updates to address the exploit, the WannaCry ransomware wreaked havoc on hundreds of thousands of unpatched computers throughout the spring of 2017, affecting organizations in more than 150 countries and causing billions of dollars in damages. While this outbreak wasn’t the result of a firmware vulnerability specifically, it offers a stark illustration of what can happen when organizations choose not to apply patches for known threats.

Installing firmware updates regularly is arguably one of the simplest and most powerful steps you can take toward better security today. Without them, your organization is at greater risk of a security incident from known vulnerabilities for which fixes already exist.

If you’re concerned that installing firmware updates might inadvertently break your organization’s systems, consider conducting field tests on a small batch of systems before rolling them out company-wide, and always keep a backup of your platform’s current image to revert to as a precautionary measure. Establish a firmware update cadence that works for your organization so your systems stay current with firmware protections at minimal risk.
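
As a minimal sketch of that staged approach, the logic below updates a small pilot batch first and halts the rollout if any pilot system fails its health checks. The helper functions backup_image, apply_update and health_check are hypothetical stand-ins for whatever vendor tooling your fleet actually uses.

```python
# Minimal staged-rollout sketch. backup_image, apply_update and health_check
# are hypothetical stand-ins for vendor-specific update tooling.
from typing import Callable, List

def staged_rollout(
    hosts: List[str],
    pilot_size: int,
    backup_image: Callable[[str], None],
    apply_update: Callable[[str], None],
    health_check: Callable[[str], bool],
) -> None:
    """Update a small pilot batch first; stop if any pilot host fails."""
    pilot, remainder = hosts[:pilot_size], hosts[pilot_size:]

    for host in pilot:
        backup_image(host)          # keep a known-good image to revert to
        apply_update(host)
    if not all(health_check(host) for host in pilot):
        print("Pilot batch failed health checks - halting rollout.")
        return

    for host in remainder:          # pilot looks healthy, continue fleet-wide
        backup_image(host)
        apply_update(host)
    print(f"Updated {len(hosts)} hosts.")
```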

3. Platform misconfigurations

Another issue that can cause firmware security risks is platform misconfigurations. Once powered on, a platform follows a complex set of steps to properly configure the computer for runtime operations. There are many time- and sequence-based elements and expectations for how firmware and hardware interact during this process, and security assumptions can be broken if the platform isn’t set up properly.

Disabled security features such as secure boot, VT-d, port protections (like Thunderbolt), execution prevention, and more are examples of potentially costly platform misconfigurations. All sorts of firmware security risks can arise if an engineer forgets a key configuration step or fails to properly configure one of the hundreds of bits involved.

Most platform misconfigurations are difficult to detect without automated security validation tools: different generations of platforms may define registers differently, there is a long list of things to check for, and there may be dependencies between settings. It can quickly become cumbersome to keep track of proper platform configurations in a cumulative way.

Fortunately, tools like the Intel-led, open-source Chipsec project can scan for configuration anomalies within your platform and evaluate security-sensitive bits within your firmware to identify misconfigurations automatically. As a truly cumulative, open-source tool, Chipsec is updated regularly with the most recent threat insights so organizations everywhere can benefit from an ever-growing body of industry research. Chipsec also has the ability to automatically detect the platform being run in order to set register definitions. On top of scanning, it also offers several firmware security tools including fuzzing, manual testing, and forensic analysis.

Although a few solutions can inspect a system’s configuration, running a Chipsec scan is a free and quick way to ensure a particular system’s settings are set to recommended values.
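
The kind of check such tools automate can be illustrated with a deliberately simplified sketch: given a register value read from the platform, verify that security-relevant bits match recommended values. The register layout, bit positions and expected values below are invented for illustration; the real checks live in Chipsec’s modules and the platform documentation.

```python
# Simplified illustration of a configuration check. The register layout,
# bit positions and expected values here are invented for the example;
# real checks come from tools such as Chipsec and platform datasheets.
EXPECTED_BITS = {
    "WRITE_PROTECT_ENABLE": (0, 1),   # (bit position, expected value)
    "LOCK_DOWN": (1, 1),
    "SMM_PROTECTION": (3, 1),
}

def check_register(value: int, expected=EXPECTED_BITS) -> list:
    """Return the names of security-relevant bits that differ from policy."""
    findings = []
    for name, (bit, want) in expected.items():
        actual = (value >> bit) & 1
        if actual != want:
            findings.append(f"{name}: expected {want}, found {actual}")
    return findings

# Example: a register value with the lock-down bit cleared
print(check_register(0b1001))   # -> ["LOCK_DOWN: expected 1, found 0"]
```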

Your organization runs on numerous hardware devices, each with its own collection of firmware. As attackers continue to set their sights further down the stack in 2020 and beyond, firmware security will be an important focus for every organization. Ensure your organization properly prioritizes defenses against this growing threat vector: install firmware updates regularly, continuously check for platform misconfigurations, and enable available security features and their respective policies to harden firmware resiliency across confidentiality, integrity and availability.

Record fall in hiring as COVID-19 ‘wreaks havoc’

Hiring activity has fallen to a record low as many employers have imposed recruitment freezes in response to the COVID-19 pandemic, new research has found.

Artificial Intelligence must be regulated to stop it damaging humanity, Google boss Sundar Pichai says

Artificial intelligence must be regulated to save humanity from being hit by its dangers, Google’s boss has said.

Google CEO Sundar Pichai speaks during a conference in Brussels on January 20, 2020

The potential damage the technology could do means it is “too important” not to be constrained, according to Sundar Pichai.

While it has the potential to save and improve lives, it could also cause damage through misleading videos and the “nefarious uses of facial recognition”, he wrote in the New York Times, calling on the world to work together to define what the future of AI should look like.

The regulation would be required to prevent AI being influenced by bias, as well as protect public safety and privacy, he said.

“Growing up in India, I was fascinated by technology. Each new invention changed my family’s life in meaningful ways. The telephone saved us long trips to the hospital for test results. The refrigerator meant we could spend less time preparing meals, and television allowed us to see the world news and cricket matches we had only imagined while listening to the short-wave radio,” he said.

“Now, it is my privilege to help to shape new technologies that we hope will be life-changing for people everywhere. One of the most promising is artificial intelligence.

“Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”

Mr Pichai pointed to Google’s own published principles on AI and said existing rules such as the GDPR in the EU could be used as the foundation for AI regulation.

“International alignment will be critical to making global standards work. To get there, we need agreement on core values. Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone,” he said.

He added that the tech giant wanted to work with others on crafting regulation.

“Google’s role starts with recognising the need for a principled and regulated approach to applying AI, but it doesn’t end there. We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.

“AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do.”

Google is one of the world’s most prominent AI developers – its virtual helper, the Google Assistant, is powered by the technology, and the company is also working on a number of other products, including driverless cars, which utilise AI.

Mr Pichai also revealed that Google’s own principles specify that the company will not design or deploy artificial intelligence in some situations, including those which “support mass surveillance or violate human rights”.

Source: The Independent 

Connected healthcare

Connectivity throughout healthcare is yielding huge benefits for the industry and patients alike and is a trend set to continue and accelerate.

However, cybersecurity is an issue. While the need for a secure enterprise-level architecture is widely acknowledged, the role played by securely coded devices is easier to ignore, yet vitally important.

Patients and providers benefit from the improved operational efficiency that comes from using real-time data from a wide range of sources, but there is a downside.

As the number of networked medical devices increases, so does the number of different points (“attack vectors”) accessible to any bad actor looking to manipulate data and cause mischief.

In 2011, the ethical hacker Barnaby Jack shone a spotlight on the issue by using modified antennae and software to demonstrate how it was possible to wirelessly attack, and take control of, Medtronic’s implantable insulin pumps and to command them to administer a fatal dose of insulin.

More recent examples demonstrate that such direct attacks remain a live concern. On August 23, 2017, for example, the Food and Drug Administration (FDA) in the US approved a firmware update to reduce the risk of patient harm due to potential exploitation of cybersecurity vulnerabilities in certain Abbott (formerly St. Jude Medical) pacemakers.

The WannaCry malware attack on the UK’s National Health Service (NHS) is another example from 2017. The malware exploited the Windows implementation of the Server Message Block (SMB) protocol to propagate, targeting MRI and CT scanners, which ran on XP workstations. These medical devices were encrypted and held for ransom, which prevented safe and effective patient treatment.

The attack was estimated to have affected more than 200,000 computers across 150 countries, with estimates of total damages ranging from hundreds of millions to billions of dollars.

Above: Mapping the capabilities of the LDRA tool suite to the guidelines of IEC 62304:2006 +AMD1:2015

Defence in depth
The diversity in the nature of attacks illustrates why no single defensive measure can ever solve the problem.

There needs to be basic housekeeping such as updating older operating systems, securing protocols, and updating and validating software and firmware. But even with those precautions, there are countless ways to attack a system and an attacker only needs a single vulnerability.

According to Professor James Reason, many aspects of medical endeavour require human input and the inevitable human error that goes with it. But generally, there are so many levels of defence that for a catastrophe to happen, an entire sequence of failures is required.
Reason likened this to a sequence of slices of ‘Swiss Cheese’, except that in his model the holes in the ‘slices’ are forever moving, closing, widening and shrinking.

Just like the checks and controls applicable to human input into medical systems, a multiple-level approach to cybersecurity makes a great deal of sense, such that if aggressors get past the first line of defence, then there are others in waiting.

Approaches and technologies that can contribute to these defences include secure network architectures, data encryption, secure middleware, and domain separation.

The medical devices themselves deserve particular attention, however. For an aggressor, the surrounding infrastructure is a means to an end; only the devices themselves provide the means to do harm.

Medical devices and cybersecurity
In the past, embedded medical software has usually been for static, fixed-function, device-specific applications. Isolation was a sufficient guarantee of security. The approach to cybersecurity and secure software development tended to be reactive: develop the software, and then use penetration, fuzz, and functional test to expose any weaknesses.

The practice of “patching” to address weaknesses found in the field is essentially an extension to this principle, but device manufacturers have a poor track record of delivering patches in a timely fashion.

In an implicit acknowledgement of that situation, in October 2018 the MITRE Corporation and the FDA released their “Medical Device Cybersecurity” playbook, consisting of four phases – preparation; detection and analysis; containment, eradication and recovery; and post-incident recovery.

Adopting a proactive approach to cybersecurity in medical devices
“Preparation” is perhaps the key element to take from this incident response cycle – not only in identifying the security measures that are in place for existing devices, but in proactively designing them into new products.

One approach to designing in cybersecurity is to mirror the development processes advocated by functional-safety standards such as IEC 62304 ‘Medical device software – software life cycle processes’.

IEC 62304 provides a common framework for developing software that fulfils requirements for quality, risk management and software safety throughout all aspects of the software development lifecycle.

Using a structured development lifecycle in this way not only applies best practices to the development lifecycle, but it also creates a traceable collection of artefacts that are invaluable in helping to provide a quick response should a breach occur.

Beyond the safety implications of any breach, this approach addresses the FDA’s recommendation that a medical device must not allow sensitive data to be viewed or accessed by an unauthorised entity. The data must remain protected and accurate, preventing hackers from altering a diagnosis or important patient information.

Above: The Swiss Cheese model, illustrating how a sequence of imperfect defensive layers will only fail when those imperfections coincide

Secure code development
Compliance with the processes advocated by regulations can be demonstrated most efficiently by applying automated tools.

Although there are some differences between developing functionally safe and cyber-secure applications, there are many similarities too. For example, both perspectives benefit from the definition of appropriate requirements at the outset, and from the bidirectional traceability of those requirements to make sure that they are completely implemented.
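
As a toy illustration of that bidirectional traceability (not drawn from the article or any particular tool), the sketch below uses invented requirement and test identifiers and flags requirements with no covering test as well as tests that trace back to no requirement; commercial tools automate this across the full artefact set.

```python
# Toy bidirectional-traceability check. Requirement and test IDs are
# hypothetical; real projects maintain this mapping in dedicated tooling.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
tests = {
    "TEST-01": {"REQ-001"},
    "TEST-02": {"REQ-001", "REQ-002"},
    "TEST-03": set(),               # a test that traces to nothing
}

covered = set().union(*tests.values())
untested_requirements = requirements - covered
untraced_tests = [t for t, reqs in tests.items() if not reqs]

print("Requirements without tests:", sorted(untested_requirements))  # ['REQ-003']
print("Tests without requirements:", untraced_tests)                 # ['TEST-03']
```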

Unit testing and dynamic analysis are equally applicable to both functional safety and cybersecurity, and in the latter case they are vital to ensure (for example) that defence mechanisms are effective and that there is no vulnerability to attack when boundary values are applied.
IEC 62304 also requires the use of coding standards to restrict the use of the specified programming language to a safe subset. In practice, code written to be functionally safe is generally also secure, because the same malpractices in programming language application often give rise to both safety and security concerns.
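
For instance, boundary-value tests for a hypothetical dose-limit validator might look like the following sketch (pytest style); the function and its limits are invented purely for illustration and are not taken from any real device.

```python
# Hypothetical dose validator and boundary-value tests (pytest style).
# The limits are invented for illustration only.
MIN_DOSE_UNITS = 0.5
MAX_DOSE_UNITS = 25.0

def dose_is_valid(units: float) -> bool:
    """Accept a dose only if it lies within the configured limits."""
    return MIN_DOSE_UNITS <= units <= MAX_DOSE_UNITS

def test_doses_on_and_inside_the_boundaries_are_accepted():
    assert dose_is_valid(MIN_DOSE_UNITS)
    assert dose_is_valid(MAX_DOSE_UNITS)
    assert dose_is_valid(10.0)

def test_doses_just_outside_the_boundaries_are_rejected():
    assert not dose_is_valid(MIN_DOSE_UNITS - 0.01)
    assert not dose_is_valid(MAX_DOSE_UNITS + 0.01)
    assert not dose_is_valid(-1.0)
```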

The growing complexity of the health delivery network

Conclusions
No connected medical system is ever going to be both useful and absolutely impenetrable. It makes sense to protect it in proportion to the level of risk involved if it were compromised, and that means applying multiple levels of security.

Medical devices themselves deserve particular attention because they provide the primary means to threaten patients. The structured development approach of a functional safety standard such as IEC 62304 can provide an ideal framework for applying a proactive approach to the development of secure applications.

Happily, many of the most appropriate quality assurance techniques for secure coding are well proven in the field of functional safety. These techniques include static analysis to ensure the appropriate application of coding standards, dynamic code coverage analysis to check for any excess “rogue code”, and the tracing of requirements throughout the development process.

The legacy of such a development process includes a structured set of artefacts that provide an ideal reference should a breach of security occur in the field. Given the dynamic nature of the endless battle between hackers and solution providers, optimising breach response times is not merely a good idea. It is a potential lifesaver.

Source: New Electronics

A new vision for AI in health-tech

The recently established digital transformation unit, NHSX, has published a major new report – Artificial Intelligence: How to get it right. This dovetails with a £250 million investment in a new NHS AI Lab, to be run in collaboration between the Accelerated Access Collaborative and NHSX.

The report explores the roll-out of AI technology across the spectrum of healthcare, in:

  • diagnostics
  • knowledge generation (drug discovery, pattern recognition, etc)
  • public health (screening programmes and epidemiology)
  • system efficiency
  • precision medicine (also called P4 – predictive, preventive, personalised and participatory)

Organisations like Genomics England, with its resource of over 100,000 genomes and over 2.5 billion clinical data points, offer an unparalleled opportunity to make advances in cancer diagnosis and treatment and in rare disease analysis. Currently, diagnosis and screening are the major areas of application for AI, with over 130 products targeting more than 70 different conditions under development. Achieving the benefits of AI throughout all potential areas of application is not guaranteed. The report identifies the major challenges and the work being done to address them.

An evidence-based approach

Importantly, the report focuses on real-world evaluation and evidence collection, as the best way to promote acceptance among both clinicians and patients.

Demonstrating the effectiveness of AI-based tools will be an important part of achieving widespread trust and adoption. A collaborative effort between NHSX, Public Health England, MedCity and health technology assessor NICE has produced an evidence standards framework for digital health technologies. This work will be taken forward by NICE with a pilot evaluation programme.

Regulatory uncertainty

The report highlights a confusing jungle of different regulators that have some involvement in the development of a health-tech innovation from concept to clinic. It is often not clear which body has oversight of which area of regulation – and indeed, some areas are not overseen at all (the quality of data used to train algorithms, for example). It is unsurprising that developers find the pathway difficult to navigate. The new NHS AI Lab will take on the task of helping to forge a clear pathway for innovators.

Many developers lack awareness of the NHS Code of Conduct for Data-Driven Health and Care Technology and the regulatory approval pathways they may need to follow. For example, half of the developers surveyed in a State of the Nation analysis had no intention to obtain certification of their technology as a medical device when this is often likely to be required.

International coordination

Internationally, development is outpacing the ethical and regulatory framework. A survey of members of the Global Digital Health Partnership highlights how policy and regulation are trailing behind the development of AI applications in health and care. Regulators are working to catch up, but there is much more to do. Developing law and regulation in a piecemeal way, country by country, is likely to hold back progress. The report highlights efforts to achieve international coordination, such as Artificial Intelligence for Health (FG-AI4H), a focus group set up by the World Health Organisation and the International Telecommunication Union to work towards a standardised assessment framework for the evaluation of AI-based digital health technologies.

Where next?

Embracing the potential of AI to build and strengthen the NHS’s health tech offering will be an important next phase for patient care, but there are risks and challenges to be overcome. Addressing these head-on is welcome. The future of the NHS AI Lab will depend on political developments in the UK, but a continuation of the current policy direction promises real opportunities for both healthcare providers and technology developers.

By Isabel Teare, Senior Legal Adviser

Source: Mills & Reeve

The role of a Software Architect

Although there is no exact, shared definition of what the practice of software architecture is, I like to compare it to the architecture of buildings, in the sense that an architect normally has a big-picture vision, defining the discipline and setting priorities and steps. In this article, we will look at the role of a software architect in software development projects.

The main role of a software architect

  • Being the “guardian of the vision”: the software architect must have, and share, a technical vision, a technical direction and a plan based on the requirements of the project.
  • The software architect should also know the disciplines he or she will use to build the system, for example the development environment, estimation, parts of DevOps and the basic methodologies (DDD, Continuous Integration, TDD and other good practices).
  • He or she should be able to transmit this knowledge to the team members, covering both the vision and the disciplines.

Importance of software documentation

I still think that documentation in software development is crucial. It really helps to share the vision and make it clear for everyone in a team why certain technical decisions were made.

Importance of having a software architect in a software team

Based on my experience, I think that, in general, software architecture decisions are critical, in the sense that a wrong decision can generate a lot of problems in terms of money and time. Conversely, good software architecture decisions help a team build working software with scalability, performance and cost reduction in mind.

Again, a good software architect can solve problems that the company has not been able to solve for several years, while a bad one – or a team without any software architect – can turn a two-week project into a one-year project. Let me stress that not all “software architects” are good; I would say there are very few good ones in the world. But software development teams are starting to understand the importance of having one, and this field is developing quite fast. In my experience, around 90% of critical software decisions are taken correctly by software architects. Moreover, he or she can improve the efficiency of a team by setting the right goals and principles.

The difference between a software architect and a software developer

I believe that a software architect should be a software developer – a good software developer. A software architect should not take long pauses from writing code. Only by gaining solid experience, working on several projects and achieving notable results can a developer evolve into an architect.

Let’s look at it in more detail:

Software architect – it is not just a title, it is a way of thinking. The architect thinks mathematically or, in other words, rationally. An architect takes into account a set of options and objectives and comes up with the optimal decision that makes a difference. He or she is responsible not only for the current sprint but for the whole project, thinking about maintenance as well.

Software developer – normally a developer makes a specific decision at a specific time, within his or her own responsibilities.

To sum it up, let me highlight my thought that “software architect” is a mental state, a way of thinking, not a diploma. Software architect thinks about the system as a whole and analyzes it even at a macro level.

By CHRISTIAN CICERI

Source: APIUMHU

The AAC – nurturing innovation through NHS/industry collaboration

Collaboration between the NHS and industry was identified in the 2017 Life sciences: industrial strategy report as a powerful tool to support innovation. The role of the NHS both as a monopoly purchaser and a testbed for novel products means that collaboration with innovators can be a powerful driver of progress.

Now the UK Government has announced an expanded role for NHS-industry collaboration in promoting innovations that will bring major benefits rapidly to NHS patients.

The Accelerated Access Review and its evolution into the Accelerated Access Collaborative

The Accelerated Access Review was set up in 2014 to see how access to innovative drugs, devices, diagnostics and digital products could be improved for NHS patients. A 2016 report on this initial project called for sustained focus and engagement to reap the full benefits of this collaborative approach. The Government renewed its commitment to working collaboratively in its 2017 Life Sciences Sector Deal, which set out plans to build on the AAR with a new Accelerated Access Collaborative (AAC) and £86 million of public funds to support innovators and the NHS in bringing forward innovative technologies.

An expanded role for the AAC

The Accelerated Access Collaborative was established in 2018 to provide a streamlined process through clinical development and regulatory approval for selected medicines, devices, diagnostic tools and digital services. Twelve diverse products have already been identified for rapid uptake. These range from a novel treatment for relapsing-remitting multiple sclerosis to diagnostic tests to detect pre-eclampsia and a novel system for treating benign prostatic hyperplasia.

The programme is now being expanded to become the main point of entry to the NHS for health innovations. This will include:

  • identifying the best new innovations
  • providing a single point of support for innovators whether within or outside the NHS
  • signalling the needs of clinicians and patients to innovators
  • establishing a testing infrastructure to generate evidence of effectiveness
  • directing funding to areas of greatest impact
  • supporting rapid take-up of proven innovations.

The remit of the AAC will be expanded under the chairmanship of Professor Lord Darzi so that it becomes the umbrella body across the UK health innovation eco-system. The new body will provide more joined-up support for innovators and a strategic approach to nurturing innovation. Dr Sam Roberts will take on the leadership of this project, alongside her current role as Director of Innovation and Life Sciences at NHS England and NHS Improvement. Voices representing the industry on the AAC board will include the ABPI and the BioIndustry Association.

This new programme, with its focused support and access to clinical evaluation within the NHS, offers a real opportunity to pharmaceutical and health technology innovators to fast-track promising products and services.

 

By Isabel Teare, Senior Legal Adviser

Source: Mills & Reeve

Strengthening Operational Resilience

How a leading UK high street bank is learning from the manufacturing industry to prevent operational disruptions to critical services and better protect customers

 It was our pleasure to speak with the Strategic Delivery Lead, Service Delivery and Operational Resilience Centre of Excellence at one of the UK’s leading high street banks (henceforth known as BANK A).

Operational resilience refers to “the ability to prevent, respond to, recover and learn from operational disruptions to services, to survive and prosper, and not cause harm to customers and the wider market” (Bank of England definition). As a result of a 2018 discussion paper and a follow-up 2019 consultation paper, both produced jointly by the United Kingdom’s banking regulators – the Bank of England, the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA) – an industry-wide consortium is collaborating to address the ‘step change’ requested by the Bank of England. The consortium’s objective is to develop a regime of operational resilience disciplines to meet the regulatory bodies’ key concerns, namely to strengthen protection for customers and the economy and to ensure institutions can withstand external threats and disruptions. This also addresses the major points raised in the UK Parliament’s Treasury Select Committee report “IT Failures in the Financial Services Sector.”

The Committee recognized that customers are increasingly expected to use digital services, yet these services are being significantly disrupted by IT failures. This leaves customers unable to make payments or withdraw cash, and small businesses unable to carry out the basic transactions needed to run their businesses. According to the Committee, the current levels and frequency of harm to customers are unacceptable. As a result, firms need to identify, stress test and set their own maximum impact tolerance levels for the disruption and duration of important business services.

In researching and to address the ‘step change’ required, BANK A took inspiration from the manufacturing industry which uses a 3D modelling approach. Dassault Systèmes was invited to explore and develop a solution, bringing their digitalization, 3D modelling, and engineering heritage gained in manufacturing and regulated industries like aerospace. The following interview with BANK A provides an initial introduction to this project which could have a profound and positive impact on the financial services industry globally.

What are the key industry challenges?

The financial services industry has had solid and rigid procedures, processes and branch networks in place, built up over decades. Today’s digitally savvy banking customers demand immediacy, ubiquity and mobile connectivity for their financial needs. Banks can’t simply keep doing what they did before; otherwise, before long, they could end up in the industrial graveyard along with companies like Nokia and Kodak.

The power of technology has enabled banks to address these changes, bringing opportunities such as mobile banking and chatbots. Change itself is one of financial services’ biggest causes of disruption, alongside cyber-attacks and the loss of critical services. This becomes more of a challenge given the manual processes and static process maps still used in financial services. These are expensive to maintain and can be inaccurate. There is more than one “golden source” of data, meaning that data derives from a single place but can be extracted in different ways or duplicated across multiple systems. The problem is further compounded by the complexity of identifying the right data sources to power analysis and decision making. How do we stay on top of operational resilience, security and sustainable revenue growth in an industry that’s been set up for stability and rigidity, and still keep up with the inevitable tempo of change? It’s like turning the Titanic but with the speed of a speedboat. It’s a fundamental challenge. Technology also provides opportunities to deliver robust risk management. To address the challenges, we need to put in place stronger operational resilience capabilities, with a continuous review of processes and procedures to support them. This way we’re able to deliver business service continuity more effectively and efficiently to customers and keep them safe.

What role does the operational resilience function play in product development to address disruption and how does it drive innovation at BANK A?

Traditionally the role of operational resilience has been seen as a risk function. At BANK A, while it is an essential element of risk management, we take a holistic view of the function. We see it as a transformational one that helps strengthen trust and keeps our customers, the economy, and the bank safe. It enables us to see the bigger picture. As a by-product, it helps us better run the business, and opens up the opportunity to drive and implement operational and customer-centric product innovations. This allows us to keep delivering products and services, irrespective of any potential disruption. We enable change at pace without disrupting our customers’ ability to access services and products, protecting them, the bank and the economy.

The operational resilience function is viewed as a vital one, and the department is immersed in the product and service development process from beginning to end. Change is the biggest known risk of failure. Including the function in operational management ensures we have the relevant resources and attention to achieve operational resilience and address our fundamental goal of protecting customers and safeguarding the bank and the country’s economy. The service delivery and operational resilience department sets the policies that new products and services must meet. By having a seat at the table, we assure policy compliance. This is important because, as our products and services evolve, the policies must also evolve to achieve the desired business outcomes and provide even better protection and services for our customers.

Do you have any programs/initiatives in place to address these issues?

We realized technology can simulate the impact of changes to policies, processes, and products before we implement them, so we can fundamentally understand organizational performance in real-time. By doing the right thing, we’re significantly strengthening product service and delivery, strengthening trust in the bank and the banking sector as well as reducing costs.

Our quest to find a more innovative approach led us to Dassault Systèmes which helps clients in other industries simulate, visualize, and optimize business operations using their 3DEXPERIENCE platform and digital twins. For example, we discussed their “Virtual Singapore” project, a 3D twin of the city, that allows stakeholders to experience and interact with the virtual city so they can understand first-hand how it works and evaluate how it can be improved. This example, and others provided by Dassault Systèmes, convinced us that their approach could be applied to any organization, including to a bank, using a virtual representation on the 3DEXPERIENCE platform. We had C-level backing within BANK A to help make this happen, an important support for such a wide-ranging and strategically innovative project.

Virtual simulation can provide the regulators with a holistic view of the financial services industry. It enables them to understand the big picture and how all institutions are operating, assisting them to work towards the key objectives set by the Treasury Select Committee. In parallel, 3D modelling enables institutions internally to better protect the customers and demonstrate we’re meeting our own—and the regulators’—objectives. Dassault Systèmes’ experience in manufacturing brings us this capability, enabling us to connect the dots, see the big picture, and visualize the problems we’re solving.

Another major benefit of our work with Dassault Systèmes is that we need to prove what we are doing is the best approach. We need to stress test to obtain better insights from impact analysis and learn from the potential impacts across the whole system. In other industries, 3D modelling impact analysis is used as the standard to guarantee digital continuity. For example, in the case of an aeroplane, when an engine fails, engineers are certain another engine can take over because they have stress tested with a digital twin. In financial services, the use of a digital twin based on live data enables us to conduct stress testing and run simulations more frequently without causing any impact. This gives us the confidence to test a live system. The same approach used in aerospace should apply to financial services, given the need to protect the person on the street and allow him/her to access banking services and go about his/her daily life.

Dassault Systèmes’ value engagement (VE) model is being implemented to define and scope the project to find a solution. Together, we identified a scope within one of BANK A’s key services to prove the concept. Using our agile methodology, we are partnering to assess the current situation ‘as is’, identify key challenges and inefficiencies, and to ascertain how technology can help to achieve the ultimate goal of strengthening operational resilience to better protect our customers.

The model helps us gather dispersed information from across the organization to support new capabilities. Part of the VE model involves using real processes, procedures and data to assess whether the Dassault Systèmes approach could be instrumental in helping the regulators and the banks overcome operational resilience issues. It also helps identify technology used in other industries and how it could be adapted for financial services.

Due to the complexity of the business, it’s always a challenge to make one individual or organization accountable for an end-to-end service. This is difficult because you need access to the right data showing that the relevant actions are being taken inside the organization, let alone by external participants that play a key role in that product or service.

Using a solution on Dassault Systèmes’ 3DEXPERIENCE platform helps us see problems as they arise. These simulation capabilities help us assess new situations to see what we should be prepared for. For example, if a cyberattack affects a particular Windows build, what parts of our infrastructure are most vulnerable and what products and services would be affected?
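
One highly simplified way to picture that kind of query is as a dependency graph walked from the affected component up to the services that rely on it. The components and services in the sketch below are invented, and a real digital twin on the 3DEXPERIENCE platform would of course carry far richer data.

```python
# Invented, highly simplified dependency model: which customer-facing
# services depend (directly or indirectly) on an affected component?
from collections import deque

DEPENDS_ON = {
    "mobile-banking": ["payments-api"],
    "payments-api": ["windows-build-1809-cluster"],
    "atm-network": ["card-switch"],
}

def impacted_services(component: str, depends_on=DEPENDS_ON) -> set:
    """Walk the graph upwards from a component to every dependent service."""
    impacted, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for service, deps in depends_on.items():
            if current in deps and service not in impacted:
                impacted.add(service)
                queue.append(service)
    return impacted

print(impacted_services("windows-build-1809-cluster"))
# -> {'payments-api', 'mobile-banking'}
```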

We are looking to automate our processes so that we can deliver change faster and more effectively. We use agile development to help match the pace the business needs and we are also employing continuous deployment to ensure that employees and customers can access up-to-date capabilities as quickly as possible. Using this new platform will also help us introduce new decision making technologies like artificial intelligence. But we have to have the right data.

What benefits/positive aspects do you expect service continuity to bring?

The fundamental benefit of service continuity is that we are better placed to safeguard and protect our customers. It enables us to continue delivering indispensable services so customers can make essential purchases and live their daily lives. It enables us to eliminate the impact of negative consequences from external threats or disruptions. And by adapting Dassault Systèmes’ 3D modelling experience in other industries, we’re able to better visualize and conduct stress testing in financial services before incidents arise. An additional benefit is that we can also eliminate the risk of harm to an institution, preventing failure and an inability to meet shareholder obligations. As a consequence of keeping our customers safe, we’re more efficient and better positioned to make stronger business decisions to support sustainable growth. This then has a positive impact on the economy. 

By Dassault Systèmes

Source: CIMdata blog
