AI Will Take Your Job!? A Deeper Dive Beyond the Headlines

That’s right: unless you live under a rock, you can’t escape this rhetoric. But is it true? The media certainly would like you to think so, wouldn’t they? I’m a bit less convinced of the inevitable doomsday when the gap between human skill and machine capability dissolves. Candidly, I’m getting annoyed with these headlines; they strike me as alarmist and lazy. There are so many facets of human intelligence and capability that we haven’t even begun to understand, so how are machines going to surpass that level of understanding when we are the ones creating the machines in the first place? But hey, they need to sell ad space somehow, right? Why not send waves of concern through the uninformed, creating oodles of opportunity to publish article after article of paranoia, because suddenly AI can predict what article to write or image to generate by mimicking troves of data from a prompt? It’s a neat trick, for sure, and it lets our creations come to life much more quickly, but can we slow down a bit on how we’ll all lose our jobs in the next five years? Don’t take just my word for it, though. Let’s look at history, and reality, before we grab the pitchforks and torches… or accept our fate without applying some logic.

The History of Innovation

Throughout history, every major innovation has been met with fear and skepticism. The Ottoman Empire’s approach to the printing press, particularly in the context of Arabic script, is a notable example of how technological adoption can be influenced by cultural, religious, and political factors. The printing press, invented in the 15th century by Johannes Gutenberg in Europe, revolutionized the spread of knowledge, but its reception in the Ottoman Empire was markedly different. One of the primary reasons for the resistance was religious and cultural: Islamic scholars and calligraphers held significant social and political power, and they viewed the printing press as a threat to their status and to the traditional methods of reproducing texts, particularly the Quran. In 1485, Sultan Bayezid II issued a decree that effectively banned the printing of Arabic script. This ban lasted for over two centuries, with some exceptions made for non-Muslim communities in the empire. Jews and Christians were allowed to print in their own languages (Hebrew, Armenian, Greek, etc.) early on, but the printing of Arabic script by Muslims remained heavily restricted until the 18th century.

The First Industrial Revolution, which began in the late 18th century, reshaped the world, leading to significant socio-economic shifts. Many feared that machinery would replace human labor, rendering workers obsolete. However, history shows us that while technology did displace some jobs, it also created new ones, increased productivity, and led to economic growth and improved living standards overall.

The Second Industrial Revolution, occurring between the late 19th and early 20th centuries, further accelerated technological advancement. It introduced mass production, telecommunications, and transportation innovations such as the railroad and the steamship, connecting the world in unprecedented ways. Like its predecessor, this period faced its share of apprehensions; the rapid urbanization and changes in employment patterns sparked debates and adjustments in societal structures. However, it also led to a surge in economic growth, job creation, and the birth of new industries, illustrating the complex relationship between technological progress and workforce transformation.

Similarly, the introduction of computers and the internet in the 20th century during the Third Industrial Revolution transformed industries and economies. Critics once predicted widespread job losses, yet these technologies led to the creation of entirely new sectors, such as software development, digital marketing, and e-commerce, proving that innovation often opens more doors than it closes.

Understanding AI’s Potential Impact

As we stand on the brink of the AI revolution, it’s essential to approach the discourse with a balanced perspective. AI, like its predecessors, will undoubtedly transform the job market. Some roles will become obsolete, but new ones will emerge in their stead, particularly in AI development, data analysis, and cybersecurity, to mention a few. Further, the rate of advancement will be dictated by the level of adoption by the masses, and when we look at how long it took for the personal computer to become a standard household appliance or business tool, the rate of growth of AI is debatable. The costs and applications will be significant, and early adopters will love what it can do, but we’ve already seen interaction with the leading AI tools in the market drop after the initial excitement. This can be credited to the additional work required even after something has been developed, to the tools producing inaccurate information, and to the lack of consistent everyday use, since this is a new activity we have to remind ourselves to perform.

Moreover, AI has the potential to enhance human capabilities rather than replace them outright. It can automate tedious and repetitive tasks, allowing humans to focus on creative, strategic, and interpersonal activities that machines cannot replicate. This synergy between human intelligence and artificial intelligence could lead to unprecedented levels of productivity and innovation.

Navigating the Future with AI

The key to leveraging AI’s potential benefits while mitigating its risks lies in adaptation and education. As a society, we must prioritize lifelong learning and re-skilling to prepare the workforce for the changing landscape. Certainly, governments, educational institutions, and businesses need to collaborate on creating pathways for individuals to transition into emerging fields, but even more, humans need to adopt a growth mindset and be willing to do the extra work.

Furthermore, ethical considerations must be at the forefront of AI development, and they are a top concern from a regulatory standpoint. Ensuring that AI is used to enhance the human experience rather than diminish it requires careful regulation, education, and oversight. By setting clear guidelines and fostering an environment of responsible innovation, we can harness the power of AI to solve complex problems and improve the quality of life for all.

Conclusion

The narrative that AI will unequivocally take your job is an oversimplification of a much more complex and dynamic reality. History teaches us that innovation can be a tide that lifts all boats, provided we navigate it with foresight and inclusivity. Instead of succumbing to fear-mongering, let’s embrace the opportunities AI presents to create a future where technology and humanity advance hand in hand.

As we continue to explore the uncharted territories of AI, it’s crucial to remember that we are the architects of this future. By fostering a culture of innovation, ethical responsibility, and lifelong learning, we can ensure that AI becomes a tool for empowerment rather than a source of displacement.

The Easy Path to AI

The explosion of interest in AI following the recent success of ChatGPT, the state-of-the-art natural language generation model that can write anything from essays to poems to code, is no surprise. However, we are now starting to see the excitement wane as ChatGPT usage numbers drop. This could be due to competition, concerns about privacy and security, or the novelty simply wearing off as users struggle to find uses for the tool. Further, to use the API available from OpenAI, you need significant technical skills and resources to train, fine-tune, and deploy a model. You also need to be careful about the quality and safety of the generated text, as it might contain errors, biases, or harmful content.

The good news? This is just one tool in a sea of many other AI tools that are refined and purpose-built for common organizational needs. At the top of that list is Microsoft’s Azure Cognitive Services, a collection of cloud-based APIs that provide ready-made AI solutions for various scenarios. Anyone who is familiar with data science and machine learning knows that we need troves of clean and trustworthy data to train an ML model to predict results. The beauty of Cognitive Services is that Microsoft has already built these models across many categories, and has even achieved “human parity” milestones in several of them! Below are just a few examples of how Cognitive Services can help you with your AI needs:

• Speech Recognition: This service allows you to convert speech to text in real time or from audio files. You can use it for voice commands, transcription, dictation, captioning, and more. You can also customize it with your own vocabulary and acoustic machine learning models.
• Computer Vision: This service allows you to analyze and understand images and videos. You can use it for face detection, emotion recognition, object detection, optical character recognition, video indexing, and more. You can also create your own custom vision models using a simple interface. I recently created a video with an overview of the service here: https://youtu.be/ac8fvBWgUHg
• Text Analytics: This service allows you to extract insights from text data. You can use it for sentiment analysis, key phrase extraction, entity recognition, language detection, and more. Another example would be to use it to analyze healthcare documents and extract clinical information.
• Many more: Cognitive Services offer a wide range of services for different domains and scenarios, such as natural language understanding, conversational AI, anomaly detection, spatial analysis, personalization, and more.
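
To make this concrete, here’s a minimal sketch of calling the Text Analytics sentiment endpoint over plain REST from Python. The endpoint and key are placeholders for your own Cognitive Services resource, and the v3.1 API path reflects the service at the time of writing, so treat this as a starting point rather than production code:

```python
import json
import urllib.request

# Placeholders: substitute your own Cognitive Services resource and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

def build_sentiment_payload(texts, language="en"):
    """Shape a list of strings into the 'documents' body the API expects."""
    return {
        "documents": [
            {"id": str(i), "language": language, "text": text}
            for i, text in enumerate(texts, start=1)
        ]
    }

def analyze_sentiment(texts):
    """POST the documents to the sentiment endpoint and return the JSON reply."""
    request = urllib.request.Request(
        f"{ENDPOINT}/text/analytics/v3.1/sentiment",
        data=json.dumps(build_sentiment_payload(texts)).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# analyze_sentiment(["The keynote was fantastic!"])  # needs a real endpoint/key
```

The same key-in-header, JSON-in-body pattern applies across the other Cognitive Services APIs; only the path and payload shape change.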

You don’t need to worry about building or managing your own AI models, and where required, many of the services allow custom models to be built as well. Once you’ve chosen a service, you just connect to the API and start using it in your applications. Even better, many of the services can be packaged in a Docker container, letting you deploy the models locally or in other clouds for even faster predictions in your application. Finally, you also get the benefits of Microsoft’s expertise and innovation in AI, such as high accuracy, reliability, security, and compliance.

To get started, many of the services have free tiers for minimal transaction volumes, and each service is billed based on consumption, so as long as you control those transactions, you don’t have to worry about cost overruns.

So, what are you waiting for? If you want to add AI capabilities to your applications without the hassle and complexity of ChatGPT and similar tools, Cognitive Services are the way to go! And if you really want to go deeper in understanding all the capabilities, check out our recent book “Practical Guide to Azure Cognitive Services” from Packt or through other online book retailers: https://bit.ly/44NKm04

If you think this was valuable, or could be improved, please leave a comment below and share it with your friends. And don’t forget to subscribe to my video blog https://youtube.com/bizdataviz for more Data and AI insights and tips.

3 Key Differences between ChatGPT and Azure OpenAI

In this vLog I discuss some misconceptions around ChatGPT and Azure OpenAI, including:

  • Who owns ChatGPT, OpenAI, and how Microsoft got involved
  • Security and privacy concerns about Azure OpenAI and ChatGPT
  • How each of the services is consumed and billed

Take a look to find out more!

| | ChatGPT | Azure OpenAI |
| --- | --- | --- |
| Ownership | Owned by OpenAI LP, the for-profit arm of the non-profit OpenAI, whose stated mission is to ensure AI benefits society | Part of the Azure AI offerings, delivered as APIs; Microsoft is an investor in OpenAI with exclusive rights to the technology generated |
| Security | Open to the public. The LLM was trained on the GPT-3 dataset and currently only references data from 2021 and earlier. Questions and interactions are captured and can be used for further training | Secured to an Azure tenant, with access to GPT-4, GPT-3.5, Codex, and DALL-E. Tenants must request access, reducing the chances of the AI being used for malicious purposes. Prompt data is not stored, and the model is not trained on the data you add |
| Costs | Free during preview stages, or paid for better hardware availability | Based on a consumption model like other Azure Cognitive Services. The biggest expense is re-training the model you’ve deployed on your data |

Here comes your CoPilot

The new age of large language models (LLMs), with their ability to accelerate various forms of novel thought, is arriving at a rapid pace. Just like an airplane copilot, we are seeing an explosion of tools in various areas to help us do our everyday jobs, making us more productive and freeing up additional time to enhance our creativity… or play Candy Crush.

You have already seen a plethora of announcements from Microsoft about copilot tools: additions to their Office productivity suite to assist the common office worker, GitHub Copilot for helping the software developer write, analyze, and document code, and copilots in Power BI and Microsoft Fabric for simplifying the sometimes tedious analysis and report-building process for the data analyst. And this is just the beginning from their standpoint. They’ve also announced an AI copilot software development kit that lets developers add a copilot to any number of business and consumer applications, assisting people with their everyday tasks and simplifying the process by which they develop and create new pieces.

The real question that comes to mind, however, is: who gets credit for the work that gets created? The question comes up frequently in the entertainment industry, where movie scripts are being drafted by these tools, thousands of songs were recently removed from Spotify because they were generated with AI, and images and videos are being created and manipulated with AI. And this is just the beginning of what I anticipate will be a massive explosion of questions around who really should get the credit for what’s being created. If AI is helping individuals complete their work at a faster pace, and the broader community is benefiting as a result, does it really matter? If I read 10 articles on copilots, retain a significant portion of what I read, then turn around, form an opinion, and write my own article about how I see things happening, as I’m doing here, is that still my work? Is it my work even though I am writing it based on a whole bunch of material that others produced, summarized in a slightly different way? This is the process by which the majority of research has been conducted for centuries, and it’s how many fiction and non-fiction works have been created. Is that really different when we look at the technology that underlies LLMs?

In the world of data science, we can see tremendous opportunity in taking advantage of already-built machine learning models and the algorithms behind them, replicating their findings across any number of data sets, and using them as a springboard for new algorithms and predictive models. Is this somehow “cheating” suddenly? Are data scientists who are working, hopefully, towards the greater good, who have the novel inspiration for what they want to build, but who use these tools to produce it more quickly, cheaters? I think these are the questions we need to be asking ourselves rather than pointing fingers at the people using the tools to produce the work.

The other major concern coming from all of this is the privacy and security implications of training these LLMs with information that ultimately should not be shared. Microsoft is providing excellent options with regard to these concerns by allowing customers to create their own instance of the various tools, such as the ChatGPT or DALL-E APIs, isolating the models trained specifically for them in their own Azure subscription; those individual models, and the data collected, are not used to train any other models. Using tools such as Google’s Bard or OpenAI’s ChatGPT interface, you do not have the same luxury, and those models are being re-trained with all the data that is fed into them. This was made loudly public recently when engineers from Samsung fed some of their data into ChatGPT, unknowingly exposing corporate secrets. It is also prompting rash decisions by CxOs to broadly ban the very tools that are helping their employees be more productive. Clearly more education is needed to inform these decisions and scenarios at every level: corporations, educational institutions, and individuals across the board.

As a writer myself, having recently published my first book, the days of writer’s block and struggling to get started on various topics and chapters still haunt me. I see these tools as an opportunity to get past that and produce work that much more quickly. Truthfully, the answer here is subjective, much like how a person feels about a painting, song, or written piece. We already base so many of our works, whether an application, a methodology, an algorithm, or more traditional artistic stylings, on knowledge we’ve acquired through experience; the notion of “original thought” is now so uncommon, and even when introduced, often rejected by the greater society, so how is this any different? I say we take advantage of the tools we are given and make the copilot as pervasive as possible to help gain efficiencies in every aspect of the modern world!

Azure Cognitive Services

AI solutions are exploding, and Azure has the most complete offering of any cloud provider! Watch this video to get started with the API-based Cognitive Services in Azure and a sample architecture for employing them with the Azure Bot Service. Azure Cognitive Services are cloud-based services with REST APIs and client library SDKs available to help you build cognitive intelligence into your applications.

You can add cognitive features to your applications without having artificial intelligence (AI) or data science skills. Azure Cognitive Services comprise various AI services that enable you to build cognitive solutions that can see, hear, speak, understand, and even make decisions. Azure Bot Service enables you to build intelligent, enterprise-grade bots with ownership and control of your data. Begin with a simple Q&A bot or build a sophisticated virtual assistant.

https://docs.microsoft.com/en-us/azure/cognitive-services/what-are-cognitive-services

https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows

https://dev.botframework.com/

#Azure #AI #CognitiveServices #ArtificialIntelligence #Bots #ReferenceArchitecture #MachineLearning #API #Cloud #Data #DataScience

What is HTAP in Azure

Hybrid Transactional and Analytical Processing, or HTAP, is an advanced database capability that allows transactional and analytical workloads to run against the same data without one impacting the performance of the other.
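
As a toy illustration of the idea (plain SQLite here, not a real HTAP engine), the sketch below shows the two workload shapes side by side against the same store. A real HTAP system’s job is to keep the analytical scan from contending with the transactional writes:

```python
import sqlite3

# One store, two workload shapes: small transactional writes and a
# scan-and-aggregate analytical query over the same data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)

# Transactional workload: small, frequent writes.
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("east", 10.0), ("west", 25.0), ("east", 5.0)],
)
conn.commit()

# Analytical workload: a full scan with aggregation over the same rows.
totals = dict(
    conn.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
)
print(sorted(totals.items()))  # [('east', 15.0), ('west', 25.0)]
```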

In this Video Blog, I cover some of the history of HTAP, some of the challenges and benefits of these systems, and where you can find them in Azure.

Overview of Azure Synapse Link featuring CosmosDB

Azure Synapse Link allows you to connect to your transactional system directly to run analytical and machine learning workloads while eliminating the need for ETL/ELT, batch processing and reload wait times.

In this vLog, I explain how to turn on the Link capability in CosmosDB, and what’s happening under the covers to give you access to that analytical workload without impacting the performance of your transactional processing system.
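
For reference, reading the analytical store from a Synapse Spark pool looks roughly like the sketch below. The linked service (“CosmosDbLink”) and container (“orders”) names are placeholders, and the `spark` session is the one a Synapse notebook provides for you:

```python
# Sketch: reading a Cosmos DB analytical store via the 'cosmos.olap'
# Spark connector inside an Azure Synapse notebook.
def cosmos_olap_options(linked_service: str, container: str) -> dict:
    """Build the option map the 'cosmos.olap' connector expects."""
    return {
        "spark.synapse.linkedService": linked_service,
        "spark.cosmos.container": container,
    }

# Inside a Synapse notebook, where `spark` is provided by the environment:
# df = (spark.read.format("cosmos.olap")
#           .options(**cosmos_olap_options("CosmosDbLink", "orders"))
#           .load())
# df.groupBy("status").count().show()
```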

Check it out here and let me know what you think!

Getting started with Spark Pools in Azure Synapse

In my latest video blog I discuss getting started with the newly generally available Spark pools in Azure Synapse, another great option for data engineering/preparation, data exploration, and machine learning workloads.

Without going too deep into the history of Apache Spark, I’ll start with the basics. In the early days of big data workloads, the basis for machine learning and deep learning for advanced analytics and AI, we would use a Hadoop cluster and move all of these datasets across disks, but the disks were always the bottleneck in the process. So the creators of Spark said, hey, why don’t we do this in memory and remove that bottleneck? They developed Apache Spark as an in-memory data processing engine, a faster way to process these massive datasets.
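
Here’s a toy illustration of that bottleneck in plain Python (not Spark itself): the same three-pass computation, once re-reading its input from disk on every pass, and once over a list cached in memory:

```python
import os
import tempfile

# A small dataset written to disk, standing in for a big-data input.
nums = list(range(200_000))
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("\n".join(map(str, nums)))
    path = f.name

def total_from_disk() -> int:
    """Each 'stage' re-reads the whole dataset from disk, Hadoop-style."""
    total = 0
    for _ in range(3):
        with open(path) as fh:
            total += sum(int(line) for line in fh)
    return total

def total_from_memory() -> int:
    """The dataset stays cached in memory between stages, Spark-style."""
    return sum(sum(nums) for _ in range(3))

# Same answer either way; the in-memory version skips three disk reads.
assert total_from_disk() == total_from_memory()
os.unlink(path)
```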

When the Azure Synapse team set out to offer the best possible data solution for all different kinds of workloads, Spark gave them an option for customers already familiar with the Spark environment, so they included it as part of the complete Azure Synapse Analytics offering.

Behind the scenes, the Synapse team manages many of the components you’d find in open-source Spark, such as:

  • Apache Hadoop YARN – for the management of the clusters where the data is being processed
  • Apache Livy – for job orchestration
  • Anaconda – a package manager, environment manager, Python/R data science distribution, and a collection of over 7,500 open-source packages for extending the capabilities of the Spark clusters

I hope you enjoy the post. Let me know your thoughts or questions!

Connecting to External Data with Azure Synapse

In my latest video blog I discuss and demonstrate some of the ways to connect to external data in Azure Synapse when there isn’t a need to import the data into the database, or when you want to do some ad-hoc analysis. I also talk about using COPY and CTAS statements if the requirement is to import the data after all. Check it out here
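
As a sketch of the CTAS pattern (the table names and distribution choice below are placeholders, not from the video), composing the statement might look like this:

```python
# Sketch: composing a CTAS (CREATE TABLE AS SELECT) statement that
# materializes an external table into a Synapse dedicated SQL pool.
def build_ctas(target: str, source: str, distribution: str = "ROUND_ROBIN") -> str:
    """Return a CTAS statement importing `source` (an external table) into `target`."""
    return (
        f"CREATE TABLE {target}\n"
        f"WITH (DISTRIBUTION = {distribution})\n"
        f"AS SELECT * FROM {source};"
    )

print(build_ctas("dbo.Sales", "ext.SalesStaged"))
# CREATE TABLE dbo.Sales
# WITH (DISTRIBUTION = ROUND_ROBIN)
# AS SELECT * FROM ext.SalesStaged;
```

The generated string would then be executed against the dedicated SQL pool with whatever client you already use (sqlcmd, pyodbc, and so on).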

Comparing Azure Synapse, Snowflake, and Databricks for common data workloads

In this vLog post I discuss how Azure Synapse, Databricks and Snowflake compare when it comes to common data workloads:

  • Data Science
  • Business Intelligence
  • Ad-hoc data analysis
  • Data Warehousing
  • and more!