Atmecs-Blog

Engineering change to accelerate digital transformation

ATMECS – Content Team

Industry 4.0 offers many possibilities to today's modern society. It promises valuable data, lower waste, and greater visibility into the end-to-end value stream for change leaders. New tools and sources of information are available to make processes efficient and profitable. Research and Markets estimates that the digital transformation market will grow at a 23% CAGR to reach $3.3 trillion by 2025. However, implementing a digital transformation requires consensus across all levels. Though change leaders are open to new technology engineering, they may face considerable resistance, some of it justified, along the way. For change leaders, implementing a digital transformation efficiently requires demonstrating value and building alignment throughout the hierarchy. The path from identifying a technological need that solves a business problem to efficient deployment is long. Change leaders turn it into reality when they state their position effectively.

Future Themes for Digital Transformation

Based on industry data, Gartner research, and our experience enabling deep-tech solutions and services for our clients, here are some common themes of the best digital transformations that will shape the future.

Process Automation and Virtualization

A large part of the workforce, across all industry verticals, is seeing a rise in the use of technology to enable process efficiency and promote automation. With better process automation and virtualization, companies are gearing up for the next wave of transformation in their enterprises. AI-based, self-learning robots using assistive technologies such as IoT present opportunities for humans to communicate better by adding granular information. With enhanced simulations and 4D printing, development processes have sped up while improving safety and operational practices.
The proliferation of emerging technologies has helped companies navigate tricky labour-related issues and redefine their work methods.

Total Experience – Breaking Through Silo Thinking

Understanding the voice of the customer is a critical success factor for any business. However, the role of "customer" has expanded to include every individual a company or brand interacts with. Total experience refers to a blend of customer experience, employee experience, and user experience, covering all touchpoints and experiences for both customers and employees. With customers and employees going digital, "listening" to the total experience is the next big thing for improving the sustainability of transformational initiatives.

Data Fabric to Enhance Decision Intelligence

Companies see a rising requirement for deeper insights and analytics from data generated by three sources – people, machines, and organizations. Decision Intelligence and Data Fabric are two significant trends that enhance the effectiveness of data use while easing human-driven data-handling jobs. Data fabric integrates data across platforms and users, creating a central source that makes data accessible wherever it is needed. Decision Intelligence, on the other hand, improves organizational decision-making by applying Artificial Intelligence and Data Analytics to build an intelligent platform that automates and enhances AI-based decision-making.

Software 2.0

With improved hardware, businesses require programs that use AI structures and automation to unlock better use cases, such as machine-supported simulations and autonomous vehicles, and do more without manual interference. Software 2.0 replaces programmers with neural networks that use ML to develop software, and it will remain on trend as its popularity keeps rising. No-code and low-code setups are already seeing citizen developers take on roles that sophisticated developers once held.
Engineering modularized, repurposable product features will enable companies to achieve greater speed to market.

Distributed Enterprise

Post pandemic, most organizations have adopted a hybrid approach, pivoting business models to meet consumer and behavioural changes through remote working. Distributed enterprises allow a distributed framework to virtualize consumer touchpoints and enhance the relevant technologies. A recent Gartner study states that 75% of companies that use distributed enterprises will grow faster than their contemporaries by 2023.

Verticals Using Cloud-Native Platforms

With the rise in digitalization, companies are moving their legacy applications and workloads to the cloud through 'lift and shift' approaches. However, that approach does not work for everyone and typically hits roadblocks around scaling or elasticity. Businesses are instead adopting cloud-native platforms to automate their workloads and tasks and concentrate on competitive differentiation.

Technology engineering is becoming increasingly important in helping companies grow, develop, and reshape the world. "Every company is a technology company" is the new paradigm for business leaders. Often, however, leaders are left wondering about the gap between innovation and adoption. This is where change leadership comes in. Change leaders with a 'Product/Service Management' mindset tend to envision a strategic roadmap grounded in customer centricity and undertake implementations with a 'no-compromise' process orientation. These change leaders are champions of industry. Here are some considerations when you, as a change leader, want to implement your next large-scale digital transformation project.

Change Leader Caveats

Getting Buy-In from Key Stakeholders

Digital transformations don't yield sustainable results when businesses are not aligned on processes and outcomes.
Indeed, most CIOs acknowledge improper focus, inefficient executive sponsorship, and resistance to change as major hindrances to successful transformation initiatives. Change leaders articulate the vision for digital transformation in a business structure and are the glue for success in transformational initiatives. From acknowledging employees' efforts to tracking results, they work between business processes and outcomes. A critical responsibility of change leaders is to achieve stakeholder consensus across different levels of the organization. Change leaders must understand every role in the process chain, above and below the surface level. Employees will always be anxious about new technologies causing disruption and making existing processes complex. It is the job of a change leader to build trust and awareness, reduce employee workload, gain consensus, and enhance the efficiency and effectiveness of daily work. They draft the vision for a digital transformation and support it throughout the implementation so that it becomes more efficient in the long run.

Sponsoring Tools and Technologies that Enable Doing More with Less

Digital transformations fall short of expectations when done piecemeal. Improper or ill-timed process modifications exacerbate the problem of data silos and lead to integration issues downstream. To derive the best results, change leaders should promote


Sports’ Tryst With NFTs

ATMECS – Content Team

The sports industry is growing rapidly. It is no longer just about competition in physical sport but also in the digital space. The accelerated growth of the partnership between sports and digital technologies over the last few years has spawned changes to the market landscape and sports-tech ecosystem that were once thought unfathomable. With the proliferation of e-gaming, betting, and the rise of online broadcasting/streaming (OTT), two major trends are currently emerging in the sports industry: Non-Fungible Tokens (NFTs), which according to the Gartner Hype Cycle 2021 are at the Peak of Inflated Expectations, and blockchain in sports. Deloitte's 2022 Sports Industry Outlook Report is optimistic about how blockchain technology will change the way fans interact with their favourite sports, and expects new types of "collectibles" to open new markets for the industry. In this blog, we would like to take you on a journey of how blockchain is redefining the sports industry, especially with respect to fan engagement, and what it means for the future.

A Quick Overview of NFTs

Blockchain technology is now widely popular in the sports industry. Most of the time, it is associated with the Initial Coin Offering (ICO), a way to encourage transparency and trust in fundraising. But what are NFTs, or non-fungible tokens? The basic idea behind an NFT is that a token is used to represent ownership of a unique item. The token, in other words, is a non-interchangeable unit of data stored on a blockchain (think of it as a digital ledger that retains the history of ownership and value) that can be sold and traded. NFT data units may be associated with digital files of art, photos, games, videos, and audio.
While Quantum, the NFT created by digital artist Kevin McCoy in 2014, is credited as the first NFT to be minted as a "monetized graphic", CryptoKitties, a blockchain-based virtual game, created the first use case for wider adoption. It allowed people to trade and breed digital cats stored in smart contracts on the Ethereum blockchain. Much of CryptoKitties' success is attributed to its conformance to the ERC-721 standard, which was explicitly developed for non-fungible tokens from the start. ERC-721 lets users store unique, identifiable data about any item that can be traded or sold in digital space, so ownership and movements are tracked through one smart contract.

The Rise of Fan Tokens

The recent rise of fan tokens in the sports industry has become a new trend as more teams and clubs adopt blockchain technology as a mainstream method for payments and transactions with fans. For example, a few European soccer clubs use fan tokens to reward their fans using blockchain technology and cryptocurrencies. Unlike non-fungible tokens, fan tokens are fungible, or interchangeable. Owners of fan tokens can exchange them for services provided by a sports team or club. Fan tokens are digital assets that can interact with club ecosystems, including voting in fan polls and earning rewards. You can also use them off-platform to buy merchandise, match tickets, and other club-related content. Fan tokens are at times referred to as sports cryptocurrencies or utility tokens. Fan tokenization gives sports teams a way to monetize their vast global fan bases while creating a more direct relationship with fans than they have ever had before.

Current NFT Marketplaces in the Sports Industry

Now that you're familiar with the NFT landscape, it's time to dive right into its heart: marketplaces. These marketplaces are where the majority of NFT transactions happen. They're one of the most critical puzzle pieces because they're where all the action happens.
When considering buying your first NFT, you should research each marketplace to determine which one is best. Here's a list of a few top NFT marketplaces:

OpenSea.io – Features CryptoPunks, Decentraland items, Cryptovoxels assets, Axie Infinity items, and more. OpenSea offers a variety of non-fungible tokens, from art to censorship-resistant domains to virtual worlds to collectibles. It supports ERC-721 and ERC-1155 assets.

Rarible.com – This community-owned NFT marketplace rewards users who sell or buy unique digital assets on the market by distributing 75,000 RARI every week to active users.

Foundation – A platform built to help digital artists and crypto-collectors connect and share ideas. It is a place where you can buy and sell artwork from top artists.

SuperRare – An online marketplace for buying and selling authentically created digital artworks, the ownership of which is recorded on the blockchain. It focuses on digital art pieces made by artists the platform personally selects.

AsyncArt – Async Art is a way to purchase digital art that evolves. Buyers choose how the art is transformed through an app that updates the piece.

KnownOrigin – A curated gallery of exclusive digital artworks, ranging from limited collectible editions to open prints and artist editions, built on Ethereum smart contracts and ERC-721 tokens. Currently, it features work from over 100 artists, including Joe Hargreaves, Quibe, and Ben Giles.

Nifty Gateway – An NFT marketplace that sells artworks from digital artists, brands, and celebrities. It is a sister company of Gemini (a licensed cryptocurrency exchange and custodian).

NBA Top Shot – The National Basketball Association (NBA), in collaboration with Dapper Labs, created NBA Top Shot, a blockchain-based platform that allows fans to buy, sell, and trade numbered versions of specific, officially licensed video highlights.
The platform allows users to transact with cryptocurrency and conventional credit-based payments. Dapper Labs has also forged a partnership with the National Football League (NFL) to launch a similar platform for NFL content called NFL ALL DAY.

Advantages of Sports NFT Marketplaces

Unlike traditional sports memorabilia, digital collectibles are not subject to physical wear and tear. They can be stored and exchanged safely in crypto wallets. The digital nature of NFTs also makes them easier and cheaper to trade. NFTs are unique digital collectibles. They are authenticated on a blockchain, making them effectively impossible


A Blog Article and A White Paper – What’s The Difference?

ATMECS – Content Team

According to the American Marketing Association, "Marketing is the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large." Marketing is the most significant element of branding and advertising. As the world continuously transforms and evolves, marketing has been a constant need of B2B enterprises, multinational companies, B2C companies, and others to make people aware of their products and services. And, sometimes, the way a company markets itself helps differentiate it from its competitors. However, the techniques of marketing, especially digital marketing, evolve rapidly. Companies across the globe are constantly finding new and innovative ways to reach and serve their target audience via online content marketing. Petabytes of fresh online content get created each day worldwide, and this content can take the form of video, short-form and long-form editorial or op-ed content, text brochures, blog posts, white papers, and more. Each type of content has its own pros and cons. In this article, we will brief you on two types of content – white papers and blogs – to help you understand what they are and how they can be used effectively as part of a digital marketing strategy.

White Papers – A Brief Introduction

When it comes to B2B marketing, white papers are a widely used content marketing tool. White papers highlight the usage and solutions that a company, or an enterprise, plans to offer its customers via its products and services. They are authoritative publications that educate people about an issue, brand, cause, solution, and so on. In most cases, white papers are the best tools for lead generation and serve as product purchase guides.
According to a Content Marketing Institute research study, approximately 64% of B2B enterprises use white papers to market their products. Since white papers are widely used in online content marketing, and influencing consideration or conversion is always an important objective, it is critical to pay attention not just to the content but also to the product or service itself. As a business, you want to convert readers into purchasers, which you may achieve by discussing how the product or service can directly benefit them. Readers need to know why they require it, how it will assist them, what issues it will solve, and other pertinent details. Some people believe that white papers are basically product or service pitches. This is not true. However, a white paper may easily come off as a sales pitch if you are not careful about managing the tone and messaging intensity. Sometimes, even blog posts tend to be perceived as selling an idea, product, or service. It is therefore crucial to strike the correct balance between convincing and informing your audience when authoring either type of content.

Blog Posts – A Brief Overview

A blog post is a technique for reaching out to target audiences directly and individually. It appears on the website, briefing customers about the product or service in approximately 350-800 words. More than 90% of content marketers use blog posts to market their products, according to SEMrush. A blog post is a collection of data, topics, or sentiments maintained in a log. It is clearly the priority of many B2B enterprises: empirical studies show companies see 55% more readers on their websites when they make blog posts a priority. Most blogs include visually appealing photographs to attract more readers. Blog posts are shorter and easy to publish. Blogs are an excellent method of grabbing the attention of your target audience; as a result, they generate approximately 67% more leads per month for a brand.
Blog articles assist organizations in establishing relationships with their internet users and consumers. One of the most significant advantages is that they can improve your SEO through targeted keywords.

So how do white papers and blog posts differ? They differ in many aspects: a blog post carries a practical perspective on a topic, whereas a white paper is an attempt to educate and promote a specific brand, idea, or premise. The two formats are built on different core factors.

Some Extra Points About White Papers

Most good white papers share the following traits:

A white paper is often delivered as a downloadable PDF. Providing an email address may be required for the download, which facilitates a continued interaction between the company and the audience.

The white paper's beginning, particularly the first paragraph, should pique the reader's interest. Consider it the elevator pitch for the content of your white paper. This is what makes it unique.

Tips to Combine White Papers with Blog Posts

An excellent white paper will provide enough material for numerous blog entries. In fact, the following is an ideal technique for combining the two:

1. Create a powerful white paper.
2. Take one major idea from the white paper and write a blog post about it.
3. Point to the landing page for the complete white paper at the conclusion of the blog post.
4. Repeat steps 2-3 until you've covered all of the white paper's main points.

This method uses your blog's SEO strength to increase visibility and downloads for your white paper. The white paper is the main piece, while the blog articles are smaller supporting pieces that link back to the main information.


Software Defined Networking (SDN)

Anupam Jagdish Bhoutkar

What is SDN?

Software-Defined Networking, or SDN, is a strategy that removes control and configuration from each individual network device and places that responsibility on a controller. When done correctly, a controller-based approach yields the following benefits:

- Automation
- Configuration consistency
- Enhanced software/firmware upgrades (easier, quicker, less downtime)
- Increased visibility into network events
- Cost reduction
- Increased performance
- Real-time remediation of network outages without human intervention

Over the past decade, virtualization has been one of the biggest changes organizations have ever seen. It has brought about real change in server provisioning by automating and streamlining the technology. However, it is a major setback that network and storage infrastructure were not modernized to keep up with the next wave of business challenges, such as cloud computing. While virtualization focused squarely on compute/server workloads, it was less concerned with the storage and network domains. Thus, fully deployed and functional VMs did not change traditional networking and storage strategies. SDN brings to hardware data centers the flexibility and economy of software that traditional networking failed to deliver.

Traditional Networks

Networks are getting too big and too complex to manage manually, one device at a time. The average network now has thousands of endpoints connected to countless routers, switches, firewalls, APs, load balancers, and optimizers. Scale alone dictates that we cannot continue the current strategy. Businesses today demand that networking adopt a more agile methodology to keep up with organizational requirements and modern frameworks like AppDev. Any downtime is now frowned upon, even when planned.
By now, SDN sounds like an ideal solution for today's organizations, but it is important to understand its architecture, benefits, misconceptions, and limitations as well.

Architecture of Software-Defined Networking (SDN)

SDN architecture separates the network into three distinguishable layers:

Application layer: The SDN application layer, not surprisingly, contains typical network applications or functions such as intrusion detection systems, load balancing, or firewalls. Applications communicate with the control layer using northbound APIs, and the control layer communicates with the data plane using southbound APIs.

Control layer: The control layer is considered the brain of SDN. Its intelligence comes from centralized SDN controller software. The controller resides on a server and manages policies and the flow of traffic throughout the network.

Infrastructure layer: The physical switches in the network constitute the infrastructure layer. A traditional network uses a specialized appliance, such as a firewall or load balancer, whereas a software-defined network replaces the appliance with an application that uses the controller to manage data-plane behaviour.

How Does It Work?

Before we define how SDN works, let us briefly touch upon what a switch is made of. A switch is a network device that consists of two components: the control plane and the forwarding plane. The forwarding plane is the hardware, apart from the CPU, that ensures packets are routed across the network. And how does this forwarding plane know what to do? That is the role of the control plane, where the routing protocols reside and do their work; their results populate the forwarding-plane tables and determine how packets are routed.
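The split between the two planes can be illustrated with a toy model. The class names and rule format below are invented for this sketch; this is not a real SDN controller API such as OpenFlow or OpenDaylight. The switch holds only a forwarding table it looks up, while the controller holds the logic that decides routes and programs the table.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the control-plane / forwarding-plane split.
class ToySwitch {
    // Forwarding plane: a plain destination -> output-port table.
    private final Map<String, Integer> forwardingTable = new HashMap<>();

    // The switch only looks up entries; it makes no routing decisions.
    int forward(String destination) {
        return forwardingTable.getOrDefault(destination, -1); // -1 means drop
    }

    // Southbound interface the controller uses to program the table.
    void installRule(String destination, int outPort) {
        forwardingTable.put(destination, outPort);
    }
}

class ToyController {
    // Control plane: centralized logic decides the routes and pushes
    // the resulting forwarding entries down to each switch it manages.
    void programNetwork(ToySwitch sw) {
        sw.installRule("10.0.0.1", 1);
        sw.installRule("10.0.0.2", 2);
    }
}
```

In a real deployment, "installRule" is where a southbound protocol like OpenFlow would sit; the point of the sketch is simply that the routing intelligence lives outside the switch.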
Thus, in simple terms, SDN is software that exerts control over the forwarding plane by expanding or replacing portions of the control plane on switches provided by vendors like Cisco, Juniper, and others. Open protocols like OpenFlow help SDN evolve without being tied to any vendor. We hope this explains how SDN has become an emerging architecture designed to manage and support virtual machines and the dynamic nature of today's applications, independent of the physical network.

Different Models of Software-Defined Networking and Vendors

Open SDN uses open protocols to control the virtual and physical devices responsible for routing data packets.

API SDN uses programming interfaces, often called southbound APIs, to control the flow of data to and from each device.

Overlay model SDN creates a virtual network above the existing hardware, providing tunnels containing channels to data centers. This model then allocates bandwidth in each channel and assigns devices to each channel.

Hybrid model SDN combines SDN and traditional networking, allowing the optimal protocol to be assigned to each type of traffic. Hybrid SDN is often used as a phase-in approach to SDN.

Gartner's Critical Capabilities for Data Center and Cloud Networking 2020 identifies the industry's top vendors of reliable, scalable, and robust SDN solutions.

Business Drivers and Challenges

Reduced CAPEX: Centralized intelligence and implementation of logic in switches eliminate the need to deploy thousands of specialized devices. Thus, the total cost allocated for switch maintenance is reduced, as is the total cost of network equipment. Many organizations want to revamp their traditional IT setup and upgrade to SDN for this reason alone.

Reduced OPEX: With the network centralized, only a few points of management remain.
Thus, it takes only a few engineers to manage the new, modern network. Moreover, centralization leaves room for better utilization of existing hardware while decreasing the need for more expensive, high-end network equipment.

Centrally managed: SDN consolidates network intelligence, providing a holistic view of network configuration and activity.

Programmable: Network features can be programmed directly, and network resources configured quickly and easily, through automated SDN services.

Agility and flexibility: As business and application needs change, administrators can adjust the network configuration as needed.

Innovation through open connectivity: SDN is based on and implemented via open standards. As a result, SDN streamlines network design and provides consistent networking in a vendor-neutral architecture.

Common Misconceptions

SDN is a significant architectural change from traditional networking infrastructure. However,


Optimizing Performance of Your Testing Team

Velmurugan Kothandapani & ATMECS Content Team

We live in a time when yesterday's imagination has become today's reality. Digital innovation, smart applications, and machine intelligence are advancing at such a rapid pace that one may wonder: what happens between technological innovation, production/development, and mass adoption of any new product? You may be surprised to know there is a tireless team of engineers who perform rigorous tests during any technology product's development and deployment cycle to ensure innovation goes from lab to market swiftly. They are the Quality Assurance (QA) team. Leaders of QA teams face a number of challenges implementing test automation "the right way" when the pace of innovation is so fast. Here are a few we have experienced first hand:

Asking the Right Questions – Early!

The foundational paradigm of every testing team is to "ask better questions" early in the Software Development Life Cycle (SDLC). A single flaw identified late in the process results in higher cost. Needless to say, not catching a defect and inadvertently allowing it into production can cause significant financial loss, damage to company credibility, and loss of customer trust.

Effective Use of Artificial Intelligence

The question is no longer whether to use AI but where AI should be deployed to get the best use out of it. As computing power, advancements in AI, and debates about what machines and humans can or should do grow every day, it is important to demarcate the roles and responsibilities of AI and human resources so that each performs at its optimum for the advancement of human society. Here, business and IT leaders need to ask whether liberating human testers from monotonous duties and letting them spend more time on exploratory testing is in the best interests of the company's IT organization.
After all, "the art of questioning" is what distinguishes humans from machines.

Organizational Asynchronicity

From sales and marketing to R&D, from development to testing, functional departments more often than not have their own KPIs and ways of functioning. This lends itself to teams working in silos, following their own departmental SOPs. QA & Testing, despite being the conscience keeper of any new product innovation, is often under-prioritized. As a result, this leads to long product test life cycles, delayed product development, and delayed time to market.

Challenges of Today's Global, Digital World

While the growth of digital technologies has enabled every company to make its product or service ubiquitous through global reach, it has also added a few headaches for testing teams. Deploying test environments (cloud vs. on-prem) and infrastructure challenges arising from multiple customer touchpoints (platforms, devices, browsers) are questions that keep testing teams up at night. Not to mention the scalability issues when the volume of test modules and test suites grows.

Cumbersome Testing Framework Development

Developing a testing framework while onboarding an automation project is time, cost, and resource intensive. It requires nuanced programming skill sets and versatile developers to be part of the framework development cycle.

Absence of the Right Tools

Given the plethora of current and future challenges faced by businesses in the post-pandemic era, it is imperative for IT leaders to "empower" their testers by providing them with best-in-class tools and technologies. More often than not, the "testing" function is likened to a "black box".
This is because there is a lack of proper reporting solutions to enable visibility into test coverage and support executive intervention and decision-making.

Introducing ATMECS FALCON – A Test Automation Platform Testers and Team Leaders Love to Use

ATMECS engineers have studied the testing landscape in depth and developed an out-of-the-box unified continuous testing platform to support and quickly automate testing of Web UI, web services, RESTful services, and mobile in one elegant platform. Falcon, an AI-powered, intelligent, automated testing platform, has made testing and automation effective, efficient, and enjoyable for testers and team leaders. With parallel execution enabled for large test-suite runs and centralized reporting to monitor and analyze all project test results in an intuitive user interface, once-dreaded activities are now seamless and easy to complete for testers both in-house and at our client deployments. Additionally, what used to take over a week to accomplish now takes less than 15 minutes with Falcon. With timely quality reporting, dashboards, and alerts, Falcon keeps key IT stakeholders informed and in control of their testing process while setting up engineering teams for successful completion and deployment. Since Falcon works seamlessly with cloud technologies, on demand and at scale, our clients have testified that with Falcon, quality is no longer a serial activity after engineering builds but a parallel activity that agile teams can depend on throughout the build cycles.

Sneak Peek at Falcon – Highlights

- One tool for web, mobile native apps, and web services (RESTful, SOAP)
- AI-powered Smart Locator Generator that automatically generates locators for the UI elements of both web and native mobile apps
- AI-powered self-healing test scripts that automatically fix and adjust to changes in the UI
- AI-powered PDF file comparison
- Test data support in XML, Excel, JSON, and databases (relational and document-based)
- Built-in integration with Jira and continuous integration tools (Jenkins)
- Built-in integration with SauceCloud and BrowserStack (cloud-based platforms for automated testing)
- AI integration for speed and accuracy
- A Lean version (without the integrations above) with all key features of the framework
- Supported browsers: IE, Chrome, Firefox, Opera, and Safari; supported operating systems: Windows, Mac, and Linux (thanks to the flexibility of Selenium)
- Integrated centralized report dashboard for the leadership team
- Usable by manual testers to automate, with minimal training and without an in-depth understanding of the tool, framework, or programming

Contact Us to Know More!

Optimizing Performance of Your Testing Team

Improve Performance by Simple Cache Mechanism in SpringBoot Application

Harish Nankani

Introduction – The Problem
For any project, there are database calls. Sometimes database calls are made from inside loops because of how the UI needs to be populated. Such loops can be repetitive, causing a performance hit by calling the database multiple times. This blog shows how to solve this performance issue with a simple cache implementation, without using any additional libraries or frameworks.

Problem with the Code
Consider DB calls that find an object by id; if such a call is made within a for loop, the code looks like this:

List<Product> productsList = productRepo.search(keyword, pageable);
ResponseDto response = new ResponseDto();
for (Product product : productsList) {
    ProductDto dto = new ProductDto();
    dto.setProduct(product);
    Company company = companyRepo.getById(product.getCompanyID());
    dto.setCompanyName(company.getName());
}

This illustrative code is only to explain the concept and is not the real code. To show the company name of each product in the UI, a DB call is made inside the for loop to get the company details. If the product list is large, this will definitely impact performance.

Basic Cache Implementation
Consider a simple cache class, CacheUtil:

public class CacheUtil {
    private static Map<String, Company> companyMap = new HashMap<>();

    public static void setCompanyMap(String id, Company company) {
        companyMap.put(id, company);
    }

    public static Company getCompanyById(String id) {
        return companyMap.get(id);
    }

    public static void clear() {
        companyMap.clear();
    }
}

The above code uses a static map so that the cache is available to all requests. It provides the company object by reference id.

How to use CacheUtil?
There is a small twist in using this cache. The strategy is to make the repo implementation custom.
public interface CompanyRepoBasic extends JpaRepository<Company, String> {
}

public interface CompanyRepoCustom {
    Company getCompanyById(String id);
}

public interface CompanyRepo extends CompanyRepoBasic, CompanyRepoCustom {
}

public class CompanyRepoImpl implements CompanyRepoCustom {
    @Autowired
    private CompanyRepoBasic companyRepoBasic;

    public Company getCompanyById(String id) {
        Company company = CacheUtil.getCompanyById(id);
        if (company == null) {
            company = companyRepoBasic.getById(id);
            CacheUtil.setCompanyMap(id, company);
        }
        return company;
    }
}

Final Call
A slight modification must be made in the for loop to make this work. Since the custom repo exposes a method with a different name, getCompanyById, the call companyRepo.getById used in the for loop should be changed to companyRepo.getCompanyById, and that's it.

for (Product product : productsList) {
    dto.setProduct(product);
    Company company = companyRepo.getCompanyById(product.getCompanyID());
    dto.setCompanyName(company.getName());
}

How it works
The CompanyRepo implementation includes both the basic repo calls and the custom repo calls. getCompanyById is the custom method that first checks the cache for the company. If the cache does not contain the company for that id, it calls the DB using the basic repo and puts the result into the cache. So if there are 100 products of the same company, the for loop will not hit the DB 100 times with this cache implementation: it will hit the DB once, and the other 99 times it will get the company object from the cache.

Enhancements
Whenever the company object is saved or updated, the cache should be updated with the latest company object. This always provides the latest company data for any request. Keep an expiry, or clear the cache after some duration – for example, add a scheduler that runs at a configured interval to clear the cache. Multiple objects can be cached; such an implementation needs a modification of the above code (the example shows only the company object).
Use a map of maps, or different maps for different objects. Add a property in application.properties or an environment variable to set the number of objects that can be cached – for example, 1000 companies. If more than 1000 are being stored, use a strategy that removes the oldest entry.

The bottom line
When it comes to offering the best experience to end users, application performance is critical. Caching helps by acting as a bridge between the server and the end user, delivering data on demand in real time. The more features you add to the cache implementation, the closer it gets to being a custom cache library. Although caching may appear to be an afterthought in small applications, it is critical in complex ones.
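As a rough illustration of the size-limit and oldest-entry-eviction enhancements described above, a LinkedHashMap with a removeEldestEntry override can serve as the backing store. This is a minimal sketch, not the article's CacheUtil: the BoundedCache name, the generic signature, and the synchronizedMap wrapper are assumptions made for the example.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded cache illustrating the "remove the oldest entry" strategy.
public class BoundedCache<K, V> {
    private final Map<K, V> map;

    public BoundedCache(int maxEntries) {
        // Insertion-order LinkedHashMap: removeEldestEntry evicts the oldest
        // entry automatically once the size limit is crossed.
        this.map = Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        });
    }

    public void put(K id, V value) { map.put(id, value); }
    public V get(K id) { return map.get(id); }
    public int size() { return map.size(); }
    public void clear() { map.clear(); }  // call this from a scheduler to expire the cache
}
```

The capacity would come from application.properties, and CacheUtil's static map could delegate to an instance of this class without changing its callers.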


Measuring Baseline Latency Metrics for Legacy Systems

Guruprasad Rao

What is a legacy system?
Legacy systems come in many flavors, built with anything from IBM COBOL and Turbo Pascal to Borland Delphi. In the context of this blog, a legacy system refers to a system implemented in an early version of Delphi, prior to 2000. The diagram given below depicts the high-level architecture of the system that will be considered as the legacy system for this blog.

Challenges with the legacy system
The biggest challenge with legacy systems is that there is no effective way to capture baseline performance latency metrics using currently available tools. If we can't capture baseline latency metrics effectively, how do we check the current performance of the system? Why can't we measure the baseline performance latency metrics? What is the root cause of not being able to measure them effectively?

Root cause
The performance of any modern application is measured using performance tools. Most tools available in the market use L7-layer protocols (HTTP/HTTPS/FTP/SMTP) to capture latency. In contrast, legacy systems built with old technology use proprietary XML over IPC (XIPC) on the OSI L4 layer. Tools developed after 2000 have matured to work with SOAP and REST on the L7 layer, with little or no support for XIPC over the L4 OSI layer. This leaves us with two options for solving the problem:

Option 1: Reengineering legacy systems to support SOAP and REST implementations. Reengineering the legacy systems may not be the optimum solution given the risks and concerns involved. With strong migration methodologies and reengineering techniques, migration may still be possible, but it takes time, maintaining and testing the system during the migration is tricky for business continuity, and the required skills are scarce in the market.
Option 2: Analyzing and conceptualizing the problem differently: understanding your current legacy system in relation to the support available in the open-source community; excluding use cases that require custom solutions; identifying timelines and prioritizing use cases, based on business needs, that can be realized using open source; and finally taking a combination of open source and custom implementation as the overall solution, depending on your legacy system's complexity.

Feasible solutions
The section below identifies three feasible solutions for measuring network latency through load testing. You can choose the right one depending on the interoperability maturity of your legacy system.

Solution 1: XML payload over TCP (L4)
In this method, TCP clients send a proprietary XML payload to the server service and receive its responses. A distributed JMeter setup helps generate the desired number of threads (users) to perform the load test. All the slaves acting as load generators must be on the same network so that there is no discrepancy in network latency that would skew the results.

Solution 2: Binary payload over TCP (L4)
This solution uses binary data as the payload. Choose this option when you lack enough understanding of your system to define an XML payload. Tools like Wireshark can be used to extract the data. Load is applied the same way as in solution 1.

Solution 3: Build your own load-testing tool over the L4 layer
Use this solution when you cannot use any of the available open-source or commercial tools to apply load due to technical challenges. Here, you build a wrapper (client application) on top of the L4 layer interface and launch multiple client application instances to perform the load test.

The table below identifies guidelines on which solution to consider for your legacy system and what benefit you gain from it.
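To make the idea of an L4 wrapper client concrete, here is a minimal sketch of the kind of probe such a tool would launch many instances of: it opens a raw TCP socket, writes an XML payload, drains the response, and reports the round-trip time. The XmlTcpProbe name and the payload are hypothetical; a real harness would add think times, thread pools, and result aggregation, which JMeter provides out of the box in solutions 1 and 2.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical L4 probe: send an XML payload over raw TCP and time the round trip.
public class XmlTcpProbe {
    public static long measureLatencyMillis(String host, int port, String xmlPayload) throws IOException {
        long start = System.nanoTime();
        try (Socket socket = new Socket(host, port)) {
            OutputStream out = socket.getOutputStream();
            out.write(xmlPayload.getBytes(StandardCharsets.UTF_8));
            out.flush();
            socket.shutdownOutput();  // signal the server that the request is complete
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) {
                // drain the proprietary XML response; a real client would parse it
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Launching many instances of a client like this, against the same network, approximates the load JMeter slaves would otherwise generate.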
ATMECS solution
Within ATMECS, we chose a mix of solution 1 and solution 3: solution 1 using a JMeter master/slave setup, modified to work with the WinApp driver.

Use case: WinApp driver with JMeter/Selenium Grid for a Windows desktop client-server legacy application
The ecosystem depicted below brings together various open-source tools available in the market to solve the challenge of capturing performance latency at scale for a legacy application. This section describes the purpose of the following tools in the ecosystem: Selenium Grid/Appium web driver, JMeter master/slave, Microsoft Windows Application (WinApp) Driver, and TFS build server.

Selenium Grid/Appium web driver
Used to scale by distributing and running tests on several machines, and to synchronize and manage multiple functionalities from a central point, making it easy to run tests against a vast combination of functional test cases. For example, managing emergency services in a control room requires synchronizing call-taker functionality (calls from the public) with call-dispatcher functionality (dispatching the police force to the incident location). The solution requires either Selenium Grid or a JMeter master/slave setup. This article explains the setup using JMeter master/slave; the same can be achieved using the Selenium Grid/Appium web driver combination.

JMeter master/slave
All the machines (master and slaves) are on the same (local) network. One machine is treated as the master, which controls the other slave machines during test execution. The slave machines follow the instructions initiated by the master machine.

WinApp Driver
WinAppDriver is a test framework developed by Microsoft as an open-source project. It is an implementation of Appium – primarily a mobile app framework, itself based on Selenium – so WinAppDriver is a Selenium-like automation framework.
This solution leverages the WinApp driver for functional testing of desktop legacy applications.

TFS server/Azure DevOps server
Used to set up a pipeline – a preconfigured set of steps that determines the build and deployment process every time the code is updated. The server hosts a build definition for the automated process and can save time through continuous integration.

BDDfy Report
By default, BDDfy generates an HTML report called 'BDDfy.Html' in your project's output folder. The HTML test report shows a summary of the test results per scenario along with each step result (and, in case of an exception, the stack trace).



Minting NFTs through API using Truffle & Rinkeby

Bhanu Mokkala

It is the season of NFTs and DeFi. In case you have been living under a rock, you can read more about NFTs and DeFi at the following links: Non-fungible tokens (NFT), Decentralized finance (DeFi).

Now that you understand the terms, let us look at how NFTs are minted. The NFT market is definitely moving from a few minters to tools and techniques that let content creators mint NFTs on their own. The following are the key steps in minting an NFT.

You need the image / artwork / clip to be uploaded to IPFS. You can use any of the IPFS clients that allow you to upload the asset and pin it, which makes the asset available for anyone to access through a link. I am using Pinata Cloud for IPFS. You also need some test ether in your MetaMask wallet: once you have installed the MetaMask browser extension, load test ether using the Rinkeby faucet. Also, load some LINK on your Rinkeby testnet address.

I built these APIs on top of an existing repo by Patrick Collins; check out the repo at the GitHub link below. Chainlink Random Character Creation The above example deals with minting a 'Dungeons and Dragons' collection to Rinkeby. It has the following key steps:

Step 1: truffle migrate --reset --network rinkeby
Step 2: truffle exec scripts/fund-contract.js --network rinkeby
Step 3: truffle exec scripts/generate-character.js --network rinkeby
Step 4: truffle exec scripts/get-character.js --network rinkeby
Step 5: truffle exec scripts/set-token-uri.js --network rinkeby

Steps 1 and 2 set up the Rinkeby connection and migrate the contracts related to NFT creation to the Rinkeby testnet.
Steps 3, 4 and 5 execute the appropriate functions on the migrated contracts to randomly select characters and set the metadata URI for the minted NFT. Please go through the README.md of the above repo to understand the other setup details.

The idea is to build a NodeJS application that drives the steps discussed above. We can use Node's child_process module to execute truffle commands on the CLI. Below is an example of wrapping the first step in a child-process call:

const { exec } = require('child_process');
const { promisify } = require('util');
const execAsync = promisify(exec);

app.get('/pushcontract', async (req, res) => {
  try {
    const { stdout } = await execAsync('truffle migrate --reset --network rinkeby');
    console.log(stdout);
    res.send('Migrated contracts');
  } catch (e) {
    console.log(e.stderr);
  }
});

Sample code for executing a child process

Just like the above sample, we can write code to execute the remaining steps to complete the minting process. Before executing these steps, we need to create the required contract and migrate it to the Rinkeby testnet. We can also create the contract needed for minting the NFT using file manipulation in NodeJS: we make changes to a 'template' contract on the fly using Node's fs library and then execute the truffle commands to migrate the contracts.
app.post('/createcontract', async (req, res) => {
  console.log('filename', req.body.filename);
  const files = fs.readdirSync('./contracts');
  console.log(files);
  files.forEach(file => {
    const fileDir = path.join('./contracts/', file);
    console.log(fileDir);
    if (file !== 'Migrations.sol') {
      try {
        fs.unlinkSync(fileDir);
      } catch (error) {
        console.log(error);
      }
    }
  });

  fs.copyFileSync('sample.sol', './contracts/' + req.body.filename + '.sol');
  const data = fs.readFileSync('./contracts/' + req.body.filename + '.sol', 'utf8');
  const result = data.replace(/DungeonsAndDragonsCharacter/g, req.body.filename);
  fs.writeFileSync('./contracts/' + req.body.filename + '.sol', result, 'utf8');

  fs.unlinkSync('./migrations/2_mycontract_migration.js');
  fs.copyFileSync('2_mycontract_migration_backup.js', './migrations/2_mycontract_migration.js');
  const data1 = fs.readFileSync('./migrations/2_mycontract_migration.js', 'utf8');
  const result1 = data1.replace(/DungeonsAndDragonsCharacter/g, req.body.filename);
  fs.writeFileSync('./migrations/2_mycontract_migration.js', result1, 'utf8');

  res.send('created contract');
});

Sample code for creating contracts from the template

In the above code block, we copy sample.sol into the contracts folder after deleting all the other existing contracts (except Migrations.sol). After copying sample.sol into the contracts folder under the desired name, we selectively replace the contents of the newly created contract based on the request received in the Express API call. The NFTs minted through the above process can be viewed in the OpenSea Rinkeby testnet gallery.

As discussed above, before we are ready to mint, we need to pin the image / artwork to IPFS. We can build APIs for uploading and pinning the image to IPFS using Pinata (there are other ways as well). Please go through their docs to identify the APIs for uploading and pinning. Once the image is successfully uploaded, the Pinata API returns a CID, a unique identifier for the uploaded file / image.
https://ipfs.io/ipfs/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?filename=filename.png

The final URI looks something like the above; the 'xxx' part is where the unique CID goes. We need to embed the image URI inside a metadata JSON file before uploading the JSON file to IPFS. Please go through the metadata folder in the Dungeons & Dragons GitHub repo for more details on how the metadata JSON file should look.

const FormData = require('form-data');

app.post('/upload', upload.single('File'), function (req, res) {
  console.log(req.file);
  const data = new FormData();
  data.append('file', fs.createReadStream(req.file.path));
  data.append('pinataMetadata', '{"name":"' + req.file.filename + '"}');

  var config = {
    method: 'post',
    url: 'https://api.pinata.cloud/pinning/pinFileToIPFS',
    headers: {
      'Content-Type': 'multipart/form-data',
      'pinata_api_key': <pinata api key>,
      'pinata_secret_api_key': <pinata secret key>,
      ...data.getHeaders()
    },
    data: data
  };

  axios(config)
    .then(function (response) {
      console.log(JSON.stringify(response.data));
      res.send(JSON.stringify(response.data));
    })
    .catch(function (error) {
      console.log(error);
    });
});

Sample code for uploading a file to IPFS using Pinata

Apart from the above, you can also plug in the marketplace from OpenSea using the OpenSea API. Below is sample ReactJS code that fetches the NFTs from OpenSea and displays them in an NFT gallery.
import React, { useState, useEffect } from 'react';
import { Container, Row, Col, Card, Button } from 'react-bootstrap';
import Imgix from 'react-imgix';

function MarketPlace() {
  const [isLoading, setIsLoading] = useState(true);
  const [NFTs, setNFTs] = useState([]);

  useEffect(() => {
    setIsLoading(true);
    var requestOptions = {
      method: 'GET',
      redirect: 'follow'
    };

    fetch("https://testnets-api.opensea.io/api/v1/assets?owner=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&offset=0&limit=50", requestOptions)
      .then(response => response.json())
      .then((result) => {
        console.log('Success:', result);
        setIsLoading(false);
        const result1 = result.assets.filter(d => d.image_thumbnail_url !== null);
        setNFTs(result1);
      })
      .catch(error => console.log('error', error));
  }, []);

  if (isLoading) {
    return (<section>Loading....</section>);
  }

  return (
    <div style={{ backgroundColor: '#111' }}>
      <Container className='mt-4'>
        <Row>
          {NFTs.map(plan => (
            <Col md={3}>
              <Card bg="dark" text="white">
                <div style={{ textAlign: 'center' }}>
                  {/* <Card.Img variant="top" src={plan.image_thumbnail_url} style={{ width: "18rem", height: "20rem" }} /> */}
                  <Imgix src={plan.image_thumbnail_url} sizes="800vw" />
                </div>
                <Card.Body>
                  <Card.Title>{plan.name}</Card.Title>
                  <Card.Text>{plan.description.replace(/^(.{20}[^\s]*).*/, "$1")}</Card.Text>
                  <Button variant="primary" onClick={() => window.open(plan.permalink, "_blank")}>Buy This NFT</Button>
                </Card.Body>
              </Card>
              <Card style={{ backgroundColor: '#111' }}><br></br></Card>
            </Col>
          ))}
        </Row>
      </Container>
    </div>
  );
}

export default MarketPlace;

Code to fetch minted NFTs from OpenSea and display them as an NFT gallery

This approach gives a better understanding of what goes into minting an NFT.



Importance of an Entrepreneurial Mindset For Employees

ATMECS Content Team

As companies prepare for an unpredictable post-pandemic future, employees need to be innovative and proactive, now more than ever. Having an entrepreneurial mindset has become a necessity. As per a Gallup poll, 87% of employees worldwide are not engaged at work; promoting an entrepreneurial mindset culture can help change that. Companies can gain an edge over their competitors, win new customers and retain existing ones, and recruit top talent. Before we dive into how you can cultivate an entrepreneurial mindset and its benefits, let's first understand the concept.

What is an entrepreneurial mindset?
According to a Forbes survey, entrepreneurs are among the healthiest, most engaged individuals on the planet, finding meaning in their work and being inspired to solve problems. Their mindset and approach towards achieving their objectives is a key factor in their ability to engage in entrepreneurial activity. They are people who see opportunity in every challenge and seize every opportunity. As a result of their thinking style, they are inventors and developers who give their company the best chance to survive and grow.

But how does it help when employees have a similar mindset? It essentially means that they are bold in their judgment, self-driven, and passionate about what they do. Entrepreneurs go out of their way to win clients and take chances. This does not mean that employees need to pressure themselves or be reckless; still, employees can adopt many entrepreneurial traits such as passion, dedication, risk-taking, and taking responsibility.

Why is an entrepreneurial mindset important?
The benefits for people who choose to lead can be substantial. However, the corporate sector is fraught with difficulties, and running a business – or even considering starting one – is not for everyone.
Understandably, some people prefer the security of a 9-to-5 job with a steady paycheck. Creating a new venture is a brave step; it takes a certain degree of courage and determination to confront the possibility of disappointment. Regardless, an entrepreneurial mentality, when instilled in employees, shifts and sharpens an individual's approach to problems. They gain a unique perspective on things and the capacity to adapt, making them well suited to building a successful firm. There are mistakes and achievements around every turn in the corporate world. The entrepreneurial mindset, and the traits and abilities that come with it, is based on a drive to achieve: challenges are seen as opportunities.

Difference between the employee and entrepreneur mindsets

Security vs. freedom
In reality, we don't see employees with an entrepreneurial mindset as often as we would like. People somehow still treat "job security" as the ultimate goal: complete high school, attend college, earn a degree, find a solid, well-paying, secure job with benefits, and save for retirement. Children with entrepreneurial parents have a 60% higher probability of starting their own business than children without an entrepreneurial background. Entrepreneurs value security as well; they simply place a considerably higher value on freedom.

Buying time for money vs. providing value for money
Employees make decisions based on the hour, working a fixed number of hours per day for which they are compensated per hour at the end of every week. For someone with an entrepreneurial mindset, however, the idea of giving away one of our most valuable assets, time, merely to benefit someone else is pure pain.

Fear vs. self-motivation
More often than not, employees are driven to the workplace because they fear losing their job security. Self-motivation is what the employee mindset often lacks. Entrepreneurs are motivated by ideas; they focus on providing value to their clients and customers.
Being held responsible vs. self-accountability
Employees frequently need to be held responsible by others, their superiors. The boss tells employees what they need to do, when to do it, and that it should be done correctly. When things don't go according to plan, employees quickly indulge in fault-finding or shifting blame. Entrepreneurs must be responsible for themselves; they should be self-disciplined and complete the tasks that are required. People do hold us accountable, namely our clients. However, as an entrepreneur, you will not have a supervisor or a time clock, so it is up to you to be punctual, do the tasks, and do them correctly. Henry Ford once famously said, "Quality means doing it right when no one is looking."

How companies can inculcate an entrepreneurial mindset in employees
So, how can you develop an entrepreneurial attitude in your company's culture? These five methods may be useful in getting you started.

Encourage a single point of focus: the client
Help employees realize that your firm is focused on the client, no matter what job they have or what task they execute. Assure them that everyone's job has an impact on the client and their customers, whether directly or indirectly. Encourage a focus on customer service and happiness throughout the organization. By answering questions such as the following, you can inspire all of your coworkers to think like your clients: What does the client want? How can I contribute to my client's happiness? How can I improve the quality, speed, and ease of my client interactions? What does the client value so highly that pricing becomes less of a factor?

Diversity and knowledge sharing
Diversity of knowledge may help foster creativity and invention, both of which are important aspects of the entrepreneurial mindset. Try to be more aware of your team's cognitive diversity so you can improve their performance and help them grow.
Allow fresh ideas to flourish
Allow individuals to develop new and improved methods for whatever role they play. Ideas thrive when they mix with other ideas and take on new shapes. Encourage individuals to contribute any idea that might help the firm make positive improvements, such as keeping up with industry trends or trading off meeting frequency for quality. Employees can contribute innovative ideas, shortcuts, comments, and other proposed enhancements to an internal blog.



Mobile Cloud Computing – Overview, Challenges and the Future

ATMECS – Content Team

At present, mobile applications have reached a level of advancement that once seemed almost impossible. Individuals can carry out actions like voice commands, face recognition, and more with a simple handheld device. App developers now have the ability to create applications with an impressive degree of user-friendliness, largely because of the massive proliferation of mobile cloud computing.

The Definition of Mobile Cloud Computing
Mobile Cloud Computing, or MCC for short, is a conjunction of three technologies: cloud computing, mobile computing, and a wireless network. All three components act together to create an application that provides extensive computational resources to a user. The use of MCC benefits the user as well as the cloud provider: users get the benefits of high storage and easy access, while the service provider collects the user fee from a large number of users. Being a win-win model, MCC has witnessed a rise in demand and has emerged as a popular option for app developers, thanks to the few restrictions the mobile cloud imposes during app development. Regular app development faces constraints like the limited storage that mobile devices possess as well as the operating system. With the combination of mobile and cloud computing, developers can ensure that tasks like data processing and data storage take place seamlessly.

Challenges accompanying mobile cloud computing
Though it may sound like using MCC to develop applications is a walk in the park, it is not so in practice. A few challenges that crop up while using this technology to develop apps include:

Less network bandwidth
Deployment using MCC requires continuous communication, which means a developer may face problems if the network being used is wireless.
This is because wireless networks – for example, 3G, Wi-Fi, or 4G networks – tend to be less reliable and offer lower bandwidth, so the speed of the applications is much slower in comparison to wired networks. While 5G networks remain a ray of hope, it is much too early to judge their effectiveness.

Service availability
Mobile users may receive a very low-frequency signal, hindering the speed as well as the storage capacity of the application. Moreover, users also experience issues like breakdowns, transport congestion, and lack of coverage.

Hardware issues
Mobile phones, even with the latest technology, have a finite source of energy: batteries. Cloud-based apps increase battery use and would therefore drain it much more quickly. This can hinder MCC development, as the user base can decline along with an increase in complaints about the impact on battery life.

Operating system issues
Applications created using MCC must function on different operating systems, so the application must be compatible with platforms like Android, iOS, and Windows Phone. To do so, the development team must possess knowledge of IRNA, or Intelligent Radio Network Access, techniques.

Security issues
The management and identification of threats has proved to be a challenging task, because MCC functions on a wireless network, where there are more chances of overlooking, or a general absence of, network information. Moreover, with multiple hand-offs within the architecture and a general lack of multi-layer security, vulnerabilities are high. These security issues stem from vulnerabilities in the MCC architecture: with multiple users accessing the clouds, there is a threat to the safety of data. If the security of one user's data is breached, other users are at risk as well.

The future of mobile cloud computing
Mobile cloud computing is a growing industrial space in itself.
As per stats from Mordor Intelligence, by 2020 the global mobile cloud computing market had registered a total value of over USD 30 billion. Growing at a CAGR of 25.28%, the industry is expected to reach USD 118.70 billion by 2026. There is more scope for startups to rise, as an MCC business doesn't require the significant investment that goes into setting up a brick-and-mortar office. Moreover, the rise of cloud computing as a need for firms only presents a brighter future for companies starting out in the space. This rise in demand for MCC can be attributed to the following:

Real-time, easy data access
The storage of data on the cloud makes it possible for users to easily find their data in a single location, owing to data synchronization between two devices, or between a device and a desktop. Therefore, data can be accessed anytime, anywhere, on any device, in real time.

Massive space for storage
As mentioned before, computing takes place on a cloud, which is known for its high storage capacity. Therefore, users need not worry about shelling out money for external memory cards or using up their internal memory.

Extension of battery life
Since data processing takes place on the cloud, the device's battery need not do much of the heavy lifting, so there is less strain on the battery while a cloud-based application runs in the background.

Mobile cloud computing certainly makes app development easier with its lack of restrictions. Furthermore, it gives users easy access to data and better storage. With these many benefits, it is no surprise that 35% of successful mobile application development projects use cloud-based app development. This demand is only likely to increase in the future as sectors like healthcare and fitness adopt MCC for developing enterprise or consumer-centric applications.
