My Transformative Journey through a 3-Month #devops Course


Embarking on a three-month DevOps course has been an enlightening and transformative experience for me. While I have successfully completed the course and actively participated in a few projects, I firmly believe that there is still a vast amount to learn and explore in the world of #devops. At its core, DevOps is built upon the principles of continuous integration and continuous delivery, with a crucial emphasis on continuous learning. With a multitude of tools and technologies available, it becomes essential to understand the “why” and “what” behind their implementation. In this blog post, I will share my journey and the key tools and concepts I encountered during the course.

Exploring the Tools and Technologies:

Throughout my DevOps course, I delved into a variety of tools and technologies that form the backbone of this field. Here are some notable ones:

RHEL / Bash Scripting (Operating System):

Understanding the fundamentals of operating systems, particularly Red Hat Enterprise Linux (RHEL), and acquiring proficiency in Bash scripting provided a strong foundation for my DevOps knowledge. These skills are indispensable for effectively managing and automating tasks within the DevOps workflow.
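To give a flavor of the kind of task this automates, here is a small hypothetical Bash sketch (the directory and file names are illustrative, not from the course): compress application logs older than a week while leaving recent ones alone.

```shell
#!/usr/bin/env bash
# Hypothetical log-rotation sketch; paths are illustrative.
set -euo pipefail

LOG_DIR="${1:-./demo-logs}"
mkdir -p "$LOG_DIR"

# Simulate one stale log (mtime pushed 8 days back) and one fresh log
touch -d '8 days ago' "$LOG_DIR/old.log"
touch "$LOG_DIR/new.log"

# Compress anything older than 7 days; recent logs are untouched
find "$LOG_DIR" -name '*.log' -mtime +7 -exec gzip {} \;

ls "$LOG_DIR"
```

A few lines like this, dropped into cron or a systemd timer, replace a recurring manual chore, which is exactly the mindset the course builds.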

Ansible (Configuration Management Tool):

As a powerful configuration management tool, Ansible enabled me to automate the provisioning, configuration, and deployment of infrastructure. Its simplicity and agentless architecture made it an ideal choice for managing large-scale environments efficiently.
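As a flavor of what that looks like, here is a minimal hypothetical playbook (the host group, package, and service names are placeholders, not from the course):

```yaml
# Hypothetical playbook: install and start a web server on every host
# in the "web" inventory group. All names are illustrative.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install httpd
      ansible.builtin.yum:
        name: httpd
        state: present

    - name: Ensure httpd is running and enabled
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```

Running `ansible-playbook` with a playbook like this against an inventory applies the same desired state to every host in the group, which is what makes large fleets manageable without agents.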

AWS (Cloud Computing):

Cloud computing lies at the heart of modern IT infrastructure, and Amazon Web Services (AWS) is a leading provider in this domain. Through hands-on experience with AWS, I gained insights into deploying, scaling, and managing applications in the cloud.

Terraform (Infrastructure as Code):

Infrastructure as code revolutionizes infrastructure management, and Terraform emerged as a widely adopted tool in this space. I learned how to define and deploy infrastructure resources programmatically, ensuring consistency and scalability.
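As a sketch of what "defining infrastructure programmatically" means, a minimal hypothetical Terraform configuration might look like this (the region, AMI ID, and tags are placeholders):

```hcl
# Hypothetical Terraform sketch: one EC2 instance. Values are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"

  tags = {
    Name = "demo-web"
  }
}
```

The usual workflow is `terraform init`, then `terraform plan` to preview changes, then `terraform apply`; because the desired state lives in version-controlled files, the same environment can be recreated consistently.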

Jenkins (Continuous Integration Tool):

Continuous integration is a crucial practice in DevOps, and Jenkins played a vital role in automating build, test, and deployment processes. By utilizing Jenkins, I experienced the benefits of collaborative development and reduced manual efforts.
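A typical way to express such a pipeline is a declarative Jenkinsfile. The sketch below is hypothetical; the stage commands (`make build`, `./deploy.sh`) are placeholders for whatever your project uses:

```groovy
// Hypothetical declarative pipeline: build, test, and deploy on main.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps { sh './deploy.sh' }
        }
    }
}
```

Keeping the pipeline definition in the repository alongside the code means the build process itself is reviewed and versioned like any other change.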

Prometheus & Grafana (Monitoring Tools):

Effective monitoring is essential for maintaining the health and performance of applications and infrastructure. Tools like Prometheus and Grafana provided me with valuable insights into monitoring and visualization techniques, enabling me to identify bottlenecks and optimize performance.
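As an illustration, a minimal hypothetical `prometheus.yml` fragment that scrapes a node_exporter endpoint might look like this (the target address is a placeholder):

```yaml
# Hypothetical Prometheus scrape config; target address is a placeholder.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['192.168.1.10:9100']
```

Grafana then sits on top of Prometheus as a data source, turning the scraped metrics into dashboards and alerts.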

ELK (Log Aggregation):

Log aggregation plays a significant role in troubleshooting and analyzing system behavior. Through the ELK stack (Elasticsearch, Logstash, and Kibana), I learned how to centralize, index, and visualize logs effectively, improving observability within the DevOps workflow.
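To make that concrete, a minimal hypothetical Logstash pipeline might read an application log, parse it, and ship it to Elasticsearch (paths, the log format, and the host are placeholders):

```conf
# Hypothetical Logstash pipeline; paths and hosts are placeholders.
input {
  file { path => "/var/log/myapp/*.log" }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch { hosts => ["http://localhost:9200"] }
}
```

Once the events are indexed, Kibana provides the search and visualization layer over them.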

Azure DevOps (SaaS Platform):

Azure DevOps, a comprehensive software-as-a-service (SaaS) platform by Microsoft, offers an end-to-end toolchain for developing and deploying software. Exploring Azure DevOps broadened my understanding of the DevOps lifecycle and its seamless integration with various tools and services.

Deployment Strategies:

Understanding different deployment strategies, such as blue-green deployments, became crucial for ensuring reliable and resilient applications. During the course, I learned about these strategies and their implementation to minimize downtime and ensure smooth releases.
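The idea behind blue-green can be sketched with plain shell, using two release directories and a `current` symlink standing in for the load-balancer switch (all names here are hypothetical):

```shell
#!/usr/bin/env bash
# Minimal blue-green sketch: a "current" symlink plays the role of the
# router/load-balancer cutover. Directory and file names are illustrative.
set -euo pipefail

mkdir -p releases/blue releases/green
echo "app v1" > releases/blue/version.txt
ln -sfn releases/blue current            # traffic currently goes to blue

# Deploy v2 to the idle (green) environment and verify it there first
echo "app v2" > releases/green/version.txt
grep -q "v2" releases/green/version.txt  # smoke test before switching

# Atomic cutover: flip the symlink to green
ln -sfn releases/green current
cat current/version.txt                  # prints: app v2
```

Rollback is just flipping the symlink back to `releases/blue`, which is the core appeal of the strategy: the previous environment stays intact until you are confident in the new one.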

Docker & Kubernetes (Containerization & Orchestration):

Containers and orchestration are essential aspects of modern application development and deployment. Through Docker and Kubernetes, I gained hands-on experience in building, packaging, and deploying applications within a containerized environment, while effectively managing their orchestration.
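For example, a container image for a small Python web app might be described like this (a hypothetical sketch; the file names and port are placeholders):

```dockerfile
# Hypothetical Dockerfile for a small Python web app; names are placeholders.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

`docker build` turns this into an image, and a Kubernetes Deployment can then run and scale replicas of that image, with the cluster handling restarts, rollouts, and service discovery.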

Embracing Agile Principles:

In addition to the vast array of tools and technologies, my DevOps course also emphasized the importance of Agile principles. Concepts such as sprint planning, daily stand-ups, and retrospectives reinforced the collaborative, iterative mindset that underpins DevOps.

NetBackup master server upgrade failure from 8.2 to 10.1, error: JRE failed to upgrade

Failure logs:


– Found an EEB for a NetBackup upgrade from 8.1.2 to 8.3 for a similar issue.
– Stopped the NetBackup services and tried upgrading NetBackup to 8.3; it was successful.
– Proceeded with the NetBackup upgrade to 10.2, which gave a database-related error. Since the relational database is migrated in NBU 10.2, the RDBMS service must be up and running during the upgrade to 10.2, so we started it.
– Finally, NetBackup was upgraded successfully to 10.2, and a catalog backup completed successfully as well.

Top .NEXT News

Nutanix Announces a vision for Platform-as-a-Service solutions and new capabilities for the hybrid multi-cloud

During its annual .NEXT conference, Nutanix, the leading provider of hybrid multi-cloud infrastructure solutions, unveiled a series of new product offerings and an ambitious vision to help customers streamline their operations across distributed environments and accelerate the deployment of modern applications.

Project Beacon

Nutanix announced Project Beacon, a multi-year effort to deliver a portfolio of data-centric Platform-as-a-Service (PaaS) level services available natively anywhere – including on Nutanix or the native public cloud. With a vision of decoupling the application and its data from the underlying infrastructure, Project Beacon aims to enable developers to build applications once and run them anywhere.

As the initial phase of Project Beacon, Nutanix will expand the capabilities and benefits of Nutanix Database Service (NDB) to the public cloud as a managed service, providing the same database automation and management experience already available on Nutanix Cloud Infrastructure (NCI). The company plans to subsequently extend this effort to other popular data-centric platform services.

Nutanix Central

The company also announced Nutanix Central, a cloud-delivered solution that provides a single console for visibility, monitoring and management across public cloud, on-premises, hosted or edge infrastructure. As organizations increasingly manage diverse and distributed environments, Nutanix Central extends the universal cloud operating model of the Nutanix Cloud Platform to break down silos and simplify managing applications and data anywhere, including integrated security and license portability. 

Nutanix Cloud Platform

Additionally, Nutanix announced new capabilities in the Nutanix Cloud Platform that enable higher performant and more secure applications and data, all managed through Nutanix Central. These enhancements include optimized database performance with a reduced total cost of ownership, as well as simplified networking and micro-segmentation capabilities across customer, partner, and hyperscaler-owned networks.

Nutanix Data Services for Kubernetes (NDK)

Nutanix has also introduced Nutanix Data Services for Kubernetes (NDK) to provide customers with scalable control over cloud-native applications and data. Initially delivered as part of Nutanix Cloud Infrastructure (NCI), NDK will bring the full power of Nutanix enterprise-class storage, snapshots, and disaster recovery (DR) to Kubernetes. This will accelerate containerized application development for stateful workloads by introducing storage provisioning, snapshots, and DR operations to Kubernetes pods and application namespaces.

Additional Data Services

Nutanix Multicloud Snapshot Technology was another notable announcement. This new technology will enable cross-cloud data mobility by allowing snapshots to be saved directly to cloud native hyperscaler’s S3 objects stores, starting with AWS S3. This will unlock hybrid multicloud data protection, recovery, and mobility use cases. 

Also, Nutanix Objects now integrates with Snowflake, allowing organizations to use Snowflake Data Cloud to analyze data directly on Nutanix Objects, ensuring data stays local and accelerates time to value.


These new products and capabilities from Nutanix will help the company deliver a seamless hybrid multicloud experience to customers, offer consistent management across endpoints, enable customers to run applications and data anywhere, and provide a comprehensive set of data-centric platform services to accelerate application development at scale.

Top 10 SEO Mistakes, Based on Recent Experience


Optimizing is all about the keywords that you want your website to rank for. But are you choosing the right ones? One of the most common mistakes in selecting keywords is neglecting the preference of search engines and users for long-tail keywords. While you might define your products and services in a certain way, it’s more important to understand what words your potential customers would use to refer to them. Sometimes the terms you consider correct might mean something completely different to other people, or could be too generic. In either case, you will be optimizing for all the wrong keywords.


Meta keywords have long been ignored by a number of search engines. Even so, website owners still stuff the tag with all the keywords their business is associated with. This is not going to help in any way. In fact, it will give your competitors a hint about the keywords you are trying to target. So make sure you do not stuff keyword meta tags on your website. If you have, it’s time to get rid of them now!


It is quite obvious that you would want your website to rank for a particular keyword. However, using such keywords numerous times is not going to help. Google has hired many brilliant minds who keep refining the search engine to make sure users get what they want. So let go of keyword stuffing and instead write words that work. If your website has content that is interesting to read, your products and services will get promoted automatically!


Sometimes, the ranking of your website is decided by Google on the basis of links that direct readers from your website to another. A few years ago, website owners used to exchange links with other companies to build up their inbound link counts. Unfortunately, you cannot do that now, as you might have to pay a penalty. So if you have been exchanging links just for the sake of it, time is up. Please stop creating backlinks for their own sake, as that alone is not going to help you.


Buying links is the last thing you should do to accomplish your SEO tasks. You might come across several dealers who will try to sell links to you, promising to boost your traffic. Don’t do it. In Google’s perception, purchased links are no better than spam emails. And you surely do not want to suffer, do you?


Gone are the days when it was necessary to ensure that the links pointing to your website contained the keywords you wanted to target. The trends are a bit different now: these links should carry the name of your company instead.


To save money, business owners often settle for SEO providers available at cheap rates. However, you should know that even a basic level of service will cost you around $1,000 every month, and the best-quality SEO services run between $3,000 and $10,000 per month. So if you come across companies offering these services at $500 per month, think twice and choose only the best. I have personal experience with services under $500: nothing helped, no results at all.


Matt Cutts, Google’s head of web spam, was of the opinion that guest blog posting should die by 2014. It seems that Google is really bothered by low-quality guest posts, so avoid low-quality links on your website, as they will result in less traffic. There is no denying the fact that search engine optimization has great potential when it comes to offering a better ROI to business owners. But it will always be wise on your part to avoid the above sins and adopt effective SEO implementation techniques. You can join our SEO training program to learn more about SEO and advanced SEO techniques.


SEO is not only about content and keywords. It’s also about the quality of your website, particularly its performance on mobile devices, which are users’ top choice today. Google and other search engines can recognize when your website is not mobile-friendly (think about the Mobilegeddon update). If you haven’t considered a smooth mobile experience for your audience, your rating on search engines can be jeopardized. The same goes for load speed, as search engines put an emphasis on that as well. You shouldn’t be surprised if a slow website leads to a lower SERP position. You can use online tools like Pingdom and GTmetrix to analyze where the speed problems come from and how to fix them.


Optimization has a social dimension as well. When you are sharing your content on social media, one of your main goals is to get the attention of users who have a significant online influence. This means their content gets noticed by both your target audience and by search engines. That’s why it’s important to create relationships with such ‘power users’ and to use their credibility to promote your content. Another aspect of this is submitting your blog posts or website promo to social sites such as Digg, Reddit, or Quora without having a ‘power user.’ It’s much easier to make noise about your content when the user sharing it has credibility on that network. Building relationships and doing structured outreach via influencers is an indispensable part of your optimization strategy.


Before we do SEO for our site, it is very important to understand how the Google search engine works. Google’s goal is to provide the best search results to users, so its ranking machinery will not rank inappropriate content.

Build a complete SEO strategy. Effective SEO cannot rely on a single tactic; it is a combination of multiple tasks and approaches.

‘Godfather of AI’ quits Google over dangers of artificial intelligence

Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work. He told the BBC some of the dangers of AI chatbots were “quite scary”. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

Key benefits of a container-based backup appliance – Veritas Flex

Software lifecycle: Upgrading the appliance has always been a time-consuming task. With the container platform, an upgrade can be done in 6 minutes, and a rollback takes 6 minutes as well.

NetBackup provisioning: Wizard-based NetBackup provisioning takes about 1 minute, including start-up. You can create mixed-version NetBackup instances within a single appliance (useful for testing; you can have the latest NetBackup running in a few minutes).

Smaller footprint: A single appliance supports everything, with a serverless architecture.

Multi-tenancy: Segregated NetBackup domains can be created within a single appliance.

Storage saving: Highly scalable; the fastest dedupe appliance on the market with the highest dedupe ratio.

How to activate the NetBackup OpsCenter license

OpsCenter is a web-based software application that helps organizations by providing visibility into their data protection environment. By using OpsCenter, you can track the effectiveness of backup operations by generating comprehensive reports.

OpsCenter is available in the following two versions:

OpsCenter: This version does not require any license. It provides a single deployment configuration and user interface for monitoring, alerting, and reporting functionality.
OpsCenter Analytics: The licensed version of OpsCenter. In addition to the features available in the unlicensed version, Analytics offers report customization and chargeback reporting.

1. Open the VERITAS licensing portal.

2. Activate the license.



Panic Details: Crash at 2023-02-05T07:30:19.561Z on CPU 4 running world 2098384 – hclk-sched-vmnic4. VMK Uptime:422:20:42:55.138

Panic Message: @BlueScreen: #PF Exception 14 in world 2098384:hclk-sched-v IP 0x41800a037c55 addr 0x0


0x451a9ab1bbf0:[0x41800a037c55]qfle3_xmit_pkt@(qfle3)#<None>+0x1e1d stack: 0x459c633c2680, 0xbad0039, 0xbad0039, 0x4180096f7f62, 0x418047800080
0x451a9ab1bdd0:[0x41800a0398aa]qfle3_uplink_tx@(qfle3)#<None>+0x8b stack: 0x430db543f790, 0x430632d89a00, 0x430632d89a00, 0x4305bcee6640, 0x1
0x451a9ab1be20:[0x41800985d144]UplinkDevTransmit@vmkernel#nover+0x4cd stack: 0x1, 0x0, 0x7c, 0x0, 0x451a9ab1bf30
0x451a9ab1bef0:[0x41800a3edb2f]NetSchedHClkRunCycle@(netsched_hclk)#<None>+0x1f8 stack: 0x1, 0x2f, 0x0, 0x4305bcee6640, 0x7c00000000
0x451a9ab1bfb0:[0x41800a3edd69]NetSchedHClkSchedSysWorld@(netsched_hclk)#<None>+0x8a stack: 0x451a9ab23000, 0x451a8d023100, 0x451a9ab23100, 0x418009912c1b, 0x0
0x451a9ab1bfe0:[0x418009912c1a]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0, 0x0, 0x0, 0x0, 0x0

A PSOD can occur when using the qfle3 driver; check your currently installed qfle3 driver version.

Qlogic has released a new driver for ESXi 6.7 and 7.0 to address this issue:

ESXi 6.7: Version

ESXi 7.0: Version

Change the Nutanix Move IP address after deployment

  1. SSH to the Nutanix Move VM.
  2. admin@move on ~ $ rs
  3. Enter the password.
  4. root@move on ~ $ configure-static-ip
  5. Enter the required information as shown in the following example.

Do you want to configure static IPv4 address?(y/N)
Enter Static IPv4 Address (e.g.
Enter Netmask (e.g.
Enter Gateway IP Address (e.g.
Enter DNS Server 1 IP Address (e.g.
Enter DNS Server 2 IP Address (e.g.
Enter Domain (e.g. blr.ste.lab)

6. Retry the failed replication.