
Software Defined Networking (SDN)

Anupam Jagdish Bhoutkar

What is SDN?
Software-Defined Networking, or SDN, is a strategy that removes control and configuration from each individual network device and places that responsibility on a controller. When done correctly, a controller-based approach yields the following benefits:
Automation
Configuration consistency
Enhanced software/firmware upgrades (easier, quicker, less downtime)
Increased visibility into network events
Cost reduction
Increased performance
Real-time remediation of network outages without human intervention

Over the past decade, virtualization has been one of the biggest changes organizations have ever seen. It brought about real change in server provisioning by automating and streamlining the technology. However, it is a major setback that network and storage infrastructure was not modernized to keep up with the next wave of business challenges, such as cloud computing. While virtualization focused on compute/server workloads, it was less concerned with the storage and network domains. Thus, fully deployed and functional VMs did not change traditional networking and storage strategies. SDN brings to hardware data centers the flexibility and economy of software that traditional networking failed to deliver.

Traditional Network
Networks are getting too big and too complex to manage manually, one device at a time. The average network now has thousands of endpoints connected to countless routers, switches, firewalls, APs, load balancers, and optimizers. Scale alone dictates that we cannot continue the current strategy. Businesses today demand that networking adopt a more agile methodology to keep up with organizational requirements and modern frameworks like AppDev. Any downtime is now frowned upon, even when planned. By now, SDN sounds like an ideal solution for today's organizations, but it is important to understand its architecture, benefits, misconceptions, and limitations as well.

Architecture of Software Defined Networking (SDN)
SDN architecture separates the network into three distinguishable layers:
Application layer: Applications communicate with the control layer using northbound APIs, and the control layer communicates with the data plane using southbound APIs. The SDN application layer, not surprisingly, contains the typical network applications or functions like intrusion detection systems, load balancers, or firewalls.
Control layer: The control layer is considered the brain of SDN. Its intelligence is provided by centralized SDN controller software. This controller resides on a server and manages policies and the flow of traffic throughout the network.
Infrastructure layer: The physical switches in the network constitute the infrastructure layer. A traditional network uses a specialized appliance, such as a firewall or load balancer, whereas a software-defined network replaces the appliance with an application that uses the controller to manage data plane behaviour.

How it works
Before we define how SDN works, let us briefly touch upon what a switch is made of. A switch is a network device that consists of two components – the control plane and the forwarding plane.
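Before unpacking each plane, here is a minimal, purely illustrative Java sketch of that division of labour: a controller installs match/action rules into a switch's forwarding table, and the forwarding plane simply looks packets up against those rules. The class, method, and rule names are hypothetical for illustration, not any vendor's or controller's actual API.

import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a switch forwarding table programmed by an SDN controller.
// Match fields and actions are simplified to strings for illustration.
public class ToyFlowTable {
    // Rules installed by the (hypothetical) controller: match -> action.
    private final Map<String, String> rules = new LinkedHashMap<>();

    // Southbound "flow-mod": the controller pushes a rule into the table.
    public void installRule(String matchDstIp, String action) {
        rules.put(matchDstIp, action);
    }

    // The forwarding plane's job: look the packet up and apply the action.
    // On a miss, a real switch would typically punt the packet to the
    // controller (an OpenFlow "packet-in") and wait for a new rule.
    public String forward(String dstIp) {
        return rules.getOrDefault(dstIp, "send-to-controller");
    }

    public static void main(String[] args) {
        ToyFlowTable table = new ToyFlowTable();
        table.installRule("10.0.0.2", "output:port2");
        System.out.println(table.forward("10.0.0.2")); // output:port2
        System.out.println(table.forward("10.0.0.9")); // send-to-controller
    }
}

The miss case is the essence of the model: the forwarding plane stays fast and simple, while all decision-making intelligence lives with the controller, which is exactly the split described next.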
The forwarding plane is the hardware, other than the CPU, that ensures packets are routed across the network. And how does this forwarding plane know what to do? That is the role of the control plane, where the routing protocols reside and perform their work; their results populate the forwarding plane's tables and determine how packets are routed. Thus, in simple terms, SDN deploys control over the forwarding plane by writing software that expands or replaces portions of the control plane shipped on switches by vendors like Cisco, Juniper, etc. Protocols like OpenFlow help this evolution of SDN precisely because they are not tied to any vendor. We hope this explains how SDN has become an emerging architecture designed to manage and support virtual machines and the dynamic nature of today's applications independent of the physical network.

Different Models of Software Defined Networking and Vendors
Open SDN uses open protocols to control the virtual and physical devices responsible for routing the data packets.
API SDN uses programming interfaces, often called southbound APIs, to control the flow of data to and from each device.
Overlay Model SDN creates a virtual network above existing hardware, providing tunnels containing channels to data centers. This model then allocates bandwidth in each channel and assigns devices to each channel.
Hybrid Model SDN combines SDN and traditional networking, allowing the optimal protocol to be assigned to each type of traffic. Hybrid SDN is often used as a phase-in approach to SDN.
According to Gartner's Critical Capabilities for Data Center and Cloud Networking 2020, listed below are some of the industry's top vendors who provide reliable, scalable, and robust SDN solutions.

Business Drivers and Challenges
Reduced CAPEX: Centralized intelligence and implementation of logic in switches eliminate the deployment of thousands of switches. Thus, the total cost allocated to switch maintenance is reduced, as is the total cost of network equipment. Many organizations want to revamp their traditional IT setup and upgrade to SDN for this reason alone.
Reduced OPEX: With the network centralized, only a few points of management remain, so far fewer engineers are needed to manage the modern network. Moreover, it allows better utilization of existing hardware while decreasing the need for more expensive, high-end network equipment.
Centrally Managed: SDN consolidates network intelligence, which provides a holistic view of the network's configuration and activity.
Programmable: The ability to directly program network features and configure network resources quickly and easily through automated SDN services.
Deliver Agility and Flexibility: As business and application needs change, administrators can adjust network configuration as needed.
Enable Innovation – Open Connectivity: SDN is based on and implemented via open standards. As a result, SDN streamlines network design and provides consistent networking in a vendor-neutral architecture.

Common Misconception
SDN is a significant architectural change over traditional networking infrastructure. However,


Optimizing Performance of Your Testing Team

Velmurugan Kothandapani & ATMECS Content Team

We live in a time where yesterday's imagination has become today's reality. Digital innovation, smart applications, and machine intelligence are advancing at such a rapid pace that one may wonder: what happens between technological innovation, production/development, and mass adoption of any new product? You may be surprised to know there is a tireless team of engineers who perform rigorous tests during any technology product development and deployment cycle to ensure innovation goes from labs to market swiftly. They are the Quality Assurance (QA) team. Leaders of QA teams face a number of challenges implementing test automation "the right way" when the pace of innovation is so fast. Here are a few we have experienced firsthand:

Asking the right questions – early!
The foundational paradigm of every testing team is to "Ask Better Questions" early in the Software Development Life Cycle (SDLC). A single flaw identified late in the process results in a higher cost to fix. Needless to say, not catching a defect and inadvertently allowing it into production could result in significant financial loss, damaged company credibility, and a loss of customer trust.

Effective use of Artificial Intelligence
The question is no longer whether to use AI but where AI should be deployed to get the best use out of it. As computing power, advancements in AI, and debates on what machine and man can or should do grow every day, it is important to demarcate the roles and responsibilities of AI and people so that each performs at its optimum for the advancement of human society. Here is where business and IT leaders need to ask whether liberating human testers from monotonous duties and letting them spend more time on exploratory testing is in the best interests of a company's IT organization. After all, "The Art of Questioning" is what distinguishes humans from machines.

Organizational Asynchronicity
From sales and marketing to R&D, from development to testing, functional departments more often than not have their own KPIs and ways of functioning. This lends itself to teams working in silos following their own departmental SOPs. QA & Testing, while being the conscience keeper of any new product innovation, is often under-prioritized. As a result, this leads to long product test life cycles, delayed product development, and delayed time to market.

Challenges due to today's global, digital world
While the growth of digital technologies has enabled every company to make its product or service ubiquitous through global reach, it has also added a few headaches for testing teams. Deploying test environments, cloud vs. on-prem, and infrastructure challenges due to multiple customer touch points – platforms, devices, browsers – are all questions that keep testing teams up at night. Not to mention scalability issues when the volume of test modules and test suites grows.

Cumbersome Testing Framework Development
Developing a testing framework while onboarding an automation project is time consuming as well as cost and resource intensive. It requires nuanced programming skill sets and versatile developers to be part of the framework development cycle.

Absence of the Right Tools
Given the plethora of current and future challenges faced by a business in the post-pandemic era, it is imperative for IT leaders to "empower" their testers by providing them with "best in class" tools and technologies.
More often than not, the "Testing" function is likened to a "black box". This is because there is a lack of proper reporting solutions to enable visibility into test coverage and support executive intervention and decision making.

Introducing ATMECS FALCON – A Test Automation Platform Testers and Team Leaders Love to Use
ATMECS engineers have studied the testing landscape in depth and have developed an out-of-the-box unified continuous testing platform to support and quickly automate testing of Web UI, web services, RESTful services, and mobile in one elegant platform. Falcon – an AI-powered, intelligent, automated testing platform – has made testing and automation effective, efficient, and enjoyable for testing resources and team leaders. With parallel execution enabled for large test suite runs, and centralized reporting to monitor and analyze all project test results in an intuitive user interface, once-dreaded activities are now seamless, easy to complete, and pleasurable for testers both in-house and at our client deployments. Additionally, what used to take over a week to accomplish now takes less than 15 minutes with Falcon. With timely quality reporting, dashboards, and alerts, Falcon keeps key IT stakeholders informed and in control of their testing process while setting up engineering teams for successful completion and deployments. Since Falcon works seamlessly with cloud technologies, on demand and at scale, our clients have testified that with Falcon, quality is no longer a serial activity after engineering builds but a parallel activity that agile teams can depend on throughout the build cycles.

Sneak peek at Falcon – Highlights
One tool for Web, Mobile Native Apps, and Web Services (RESTful, SOAP)
AI-powered Smart Locator Generator that generates locators automagically for the UI elements of both web and native mobile apps
AI-powered self-healing test scripts that automatically fix and adjust to changes in the UI
AI-powered PDF file comparison
Test data support in XML, Excel, JSON, and DB (relational, document based)
Built-in integration with Jira and Continuous Integration tooling (Jenkins)
Built-in integration with Sauce Labs and BrowserStack (cloud-based platforms for automated testing)
AI integration for speed and accuracy
The suite also provides a Lean version (without the integrations above) with all key features of the framework
Supported browsers are IE, Chrome, Firefox, Opera, and Safari, while supported operating systems are Windows, Mac, and Linux (thanks to the flexibility of Selenium)
Integrated centralized report dashboard for the leadership team
Manual testers can also use this framework to automate, with minimal training and without an in-depth understanding of the tool / framework / programming
Contact Us to Know More!


Improve Performance by Simple Cache Mechanism in SpringBoot Application

Harish Nankani

Introduction – The Problem
Every project makes database calls. Sometimes those calls are made from inside loops because of how the UI response must be assembled. Such loops can be repetitive, causing a performance hit by calling the database multiple times. This blog shows how to solve this performance issue with a simple cache implementation, without using any additional libraries or frameworks.

Problem with Code
Consider DB calls that find an object by id. If such a call is made within a for loop, the code looks like this:

List<Product> productsList = productRepo.search(keyword, pageable);
ResponseDto response = new ResponseDto();
for (Product product : productsList) {
    ProductDto dto = new ProductDto(); // dto construction assumed; illustrative only
    dto.setProduct(product);
    // One DB call per product to fetch its company
    Company company = companyRepo.getById(product.getCompanyID());
    dto.setCompanyName(company.getName());
}

This illustration code is only to understand the concept and is not the real code. To show the company name of each product in the UI, a DB call is made within the for loop to get the company details. If the product list is huge, it will definitely impact performance.

Basic Cache Implementation
Consider a simple cache class, CacheUtil:

public class CacheUtil {

    // Static map so the cache is shared across all requests.
    // A ConcurrentHashMap keeps concurrent request handling safe.
    private static final Map<String, Company> companyMap = new ConcurrentHashMap<>();

    public static void setCompanyMap(String id, Company company) {
        companyMap.put(id, company);
    }

    public static Company getCompanyById(String id) {
        return companyMap.get(id);
    }

    public static void clear() {
        companyMap.clear();
    }
}

The above code uses a static map to ensure that the cache is available to all requests. It returns the company object by its reference id.

How to use CacheUtil?
There is a small twist in using this cache. The strategy is to make the repository implementation custom:

public interface CompanyRepoBasic extends JpaRepository<Company, String> {
}

public interface CompanyRepoCustom {
    Company getCompanyById(String id);
}

public interface CompanyRepo extends CompanyRepoBasic, CompanyRepoCustom {
}

public class CompanyRepoImpl implements CompanyRepoCustom {

    @Autowired
    private CompanyRepoBasic companyRepoBasic;

    @Override
    public Company getCompanyById(String id) {
        Company company = CacheUtil.getCompanyById(id);
        if (company == null) {
            company = companyRepoBasic.getById(id);
            CacheUtil.setCompanyMap(id, company);
        }
        return company;
    }
}

Final Call
A slight modification to the for loop makes it all work. As the custom repo exposes a different method name, getCompanyById, the call companyRepo.getById used in the for loop should be changed to companyRepo.getCompanyById, and that's it.

for (Product product : productsList) {
    ProductDto dto = new ProductDto(); // as above, illustrative only
    dto.setProduct(product);
    Company company = companyRepo.getCompanyById(product.getCompanyID());
    dto.setCompanyName(company.getName());
}

How it works
The CompanyRepo implementation includes both basic repo calls and custom repo calls. getCompanyById is the custom method that checks the cache for the company first. If the cache does not contain the company for that id, it calls the DB using the basic repo and puts the result into the cache. So, if there are 100 products of the same company, the for loop will not hit the DB 100 times with this cache implementation. It will hit the DB once, and for the other 99 iterations it will get the company object from the cache.

Enhancements
Whenever the company object is saved or updated, the cache should be updated with the latest company object. This will always provide the latest company data for any request.
Keep an expiry, or clear the cache after some duration: add a scheduler that runs at a configured interval and clears the cache.
Multiple objects can be cached; such an implementation needs modification of the above code (the example shows only the company object). Use a map of maps, or different maps for different object types.
Add a property in application.properties or an environment variable to set the number of objects that can be cached. For example, 1000 companies can be cached; if more than 1000 are stored, apply a strategy that removes the oldest company. A sketch combining the expiry and size-bound ideas follows below.
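As an illustration of those enhancements – a minimal sketch, assuming the same Company type as above, with hypothetical names and limits – the JDK alone is enough to add a size bound with oldest-entry eviction and a periodic clear:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedCacheUtil {

    // Could be read from application.properties instead of hard-coding.
    private static final int MAX_ENTRIES = 1000;

    // LinkedHashMap + removeEldestEntry gives simple "remove the oldest" eviction;
    // synchronizedMap keeps the shared cache safe across concurrent requests.
    private static final Map<String, Company> companyMap =
            Collections.synchronizedMap(new LinkedHashMap<String, Company>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Company> eldest) {
                    return size() > MAX_ENTRIES;
                }
            });

    static {
        // Scheduler that clears the whole cache at a fixed interval (expiry),
        // so stale data has a bounded lifetime.
        Executors.newSingleThreadScheduledExecutor()
                .scheduleAtFixedRate(companyMap::clear, 1, 1, TimeUnit.HOURS);
    }

    public static void put(String id, Company company) {
        companyMap.put(id, company);
    }

    public static Company getCompanyById(String id) {
        return companyMap.get(id);
    }
}

Once the feature list grows beyond this, Spring Boot's own caching abstraction (@EnableCaching with @Cacheable) or a dedicated caching library is usually the better trade-off; the sketch above is only meant to show the ideas without extra dependencies.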
The bottom line
We generally know that when it comes to offering the best experience to end users, application performance is critical. Caching aids in this by acting as a bridge between the server and the end user, delivering data on demand in real time. The more features that are added to the cache implementation, the closer it becomes to a custom cache library. Although caching may appear to be an afterthought in small applications, it is critical in complex ones.

Measuring Baseline Latency Metrics for Legacy Systems

Guruprasad Rao

What is a legacy system?
Legacy systems have been built in everything from IBM COBOL and Turbo Pascal to Borland Delphi. In the context of this blog, a legacy system refers to a system implemented in an early version of Delphi, prior to 2000. The diagram given below depicts the high-level architecture of the system that will be treated as the legacy system for this blog.

Challenges with the legacy system
The biggest challenge of legacy systems is that there is no effective way to capture baseline performance latency metrics using currently available tools. If we can't capture baseline latency metrics effectively, how do we check the current performance of the system? Why can't we measure the baseline performance latency metrics? What is the root cause of not being able to measure it effectively?

Root cause
The performance of any modern application is measured using performance tools. Most tools available in the market capture latency over L7 protocols (HTTP/HTTPS/FTP/SMTP). In contrast, legacy systems built with older technology use proprietary XML over IPC (XIPC) on the OSI L4 layer. The tools developed post-2000 have matured to work with SOAP and REST on the L7 layer, with little or no support for XIPC over the L4 OSI layer. This leaves us with two options for solving the problem:

Option 1: Reengineering legacy systems to support SOAP and REST implementations. Reengineering legacy systems may not be the optimum solution given the risks and concerns involved. With strong migration methodologies and reengineering techniques, migration may still be possible, but it takes time, and maintaining and testing the systems during such a migration is tricky for business continuity, as is the availability of skills in the market.

Option 2: Analyzing and conceptualizing the problem differently and understanding your current legacy system in relation to the support available in the open-source community. Excluding use cases that require custom solutions. Identifying timelines and prioritizing use cases, based on business needs, that can be realized using open source. Finally, taking a combined route of open source and custom implementation as the overall solution, depending on your legacy system's complexity.

Feasible solutions
The section below identifies three feasible solutions for measuring network latency through load testing. You can choose the right one depending on the interoperability maturity of your legacy system.

Solution 1: XML payload over TCP (L4)
In this method, TCP clients send a proprietary XML payload to the server service and receive its responses. A distributed JMeter setup helps generate the desired number of threads (users) to perform the load test. All the slaves acting as load generators must be on the same network so that there is no discrepancy in network latency, which would skew the result.

Solution 2: Binary payload over TCP (L4)
This solution uses binary data as the payload. This option is chosen when you lack enough understanding of your system to define an XML payload. Tools like Wireshark can be used to extract the data. The way of applying load is similar to solution 1.

Solution 3: Build your own load testing tool over the L4 layer
You use this solution when you are not able to use any of the open-source or commercial tools available to apply load due to technical challenges. In this solution, you build a wrapper (client application) on top of the L4 layer interface and launch multiple client application instances to perform load testing; a minimal sketch of such a client follows below.
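To make the wrapper idea concrete – a minimal sketch only, assuming a hypothetical host and port and a newline-terminated XML request/response exchange over plain TCP (a real protocol will need its own framing) – a bare-bones Java load client could look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Minimal L4 load-test client: opens a TCP connection, sends an XML payload,
// waits for one line of response, and records the round-trip time.
public class TcpLatencyClient implements Runnable {
    private static final String HOST = "legacy-server.local"; // hypothetical host
    private static final int PORT = 9000;                      // hypothetical port
    private static final String PAYLOAD =
            "<request><operation>ping</operation></request>";  // illustrative payload

    @Override
    public void run() {
        try (Socket socket = new Socket(HOST, PORT);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            long start = System.nanoTime();
            out.println(PAYLOAD);            // send the proprietary XML request
            String response = in.readLine(); // assumes a newline-terminated reply
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.printf("%s latency: %d us%n",
                    Thread.currentThread().getName(), micros);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        // Launch N concurrent "users", like JMeter threads on one load generator.
        int users = 50;
        for (int i = 0; i < users; i++) {
            new Thread(new TcpLatencyClient(), "user-" + i).start();
        }
    }
}

Each thread here plays the role of one JMeter user; spreading such clients across several machines on the same network mirrors the distributed master/slave setup described above.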
The table below identifies guidelines on which solution should be considered for your legacy system and what benefit you gain from it.

ATMECS solution
Within ATMECS, we chose a mix of option 1 and option 3. Option 1 uses a JMeter master/slave setup, modified to work with the WinApp driver.

Use case: WinApp driver with JMeter/Selenium Grid for a Windows desktop client-server legacy application
The ecosystem depicted below brings together various open-source tools available in the market to solve the challenge of capturing performance latency at scale for a legacy application. This section describes the purpose of the following tools within the ecosystem:
Selenium Grid/Appium web driver
JMeter Master/Slave
Microsoft Windows Application (WinApp) Driver
TFS Build server

Selenium Grid/Appium web driver
It is used to scale by distributing and running tests on several machines, synchronizing and managing multiple functionalities from a central point, making it easy to run tests against a vast combination of functional test cases. For example, managing emergency services in a control room requires synchronizing call-taker functionality (receiving calls from the public) with call-dispatcher functionality (dispatching the police force to the incident location). The solution requires either Selenium Grid or JMeter master/slave. This article explains the setup using the JMeter master/slave arrangement; however, the same can be achieved using the Selenium Grid/Appium web driver combination.

JMeter Master/Slave
All the machines (master and slaves) are on the same (local) network. Among them, one machine is treated as the master, which controls the other slave machines during test execution. The slave machines follow the instructions initiated by the master machine.

WinApp Driver
WinAppDriver is a test framework developed by Microsoft as an open-source project; it is an implementation of Appium, which is primarily a mobile app framework, itself based on Selenium. WinAppDriver is therefore a Selenium-like automation framework. This solution leverages the WinApp driver as part of functional testing for desktop legacy applications.

TFS server/Azure DevOps server
Used to set up a pipeline – a preconfigured set of steps that determine the build and deployment process every time there is an update to your code. The server hosts a build definition for the automated process and can save time on continuous integration.

BDDfy Report
By default, BDDfy also generates an HTML report called 'BDDfy.Html' in your project's output folder. The HTML test report shows a summary of the test result scenarios along with each step result (and, in case of an exception, the stack trace). You have



Minting NFTs through API using Truffle & Rinkeby

BHANU MOKKALA

It is the season of NFTs and DeFi. In case you have been living under a rock, you can read more about NFTs and DeFi using the following links.
Non-fungible tokens (NFT)
Decentralized finance (DeFi)

Now that you understand the terms, let us understand how NFTs are minted. The NFT market is definitely moving from a few minters to tools & techniques that let content creators mint NFTs on their own. The following are the key steps in minting an NFT:
You need the image / art work / clip to be uploaded to IPFS. You can use any of the IPFS clients that allow you to upload the asset and pin it, which makes the asset available for anyone to access through a link. I am using Pinata Cloud for IPFS.
You need some test ethers in your MetaMask wallet. Once you have installed the MetaMask browser extension, load test ethers using the Rinkeby faucet. Also, load some LINK on your Rinkeby testnet address.

I built these APIs on top of an existing repo by Patrick Collins. Check out the repo at the GitHub link below.
Chainlink Random Character Creation
The above example deals with minting a 'Dungeons and Dragons' collection to Rinkeby. It has the following key steps:
Step 1: truffle migrate --reset --network rinkeby
Step 2: truffle exec scripts/fund-contract.js --network rinkeby
Step 3: truffle exec scripts/generate-character.js --network rinkeby
Step 4: truffle exec scripts/get-character.js --network rinkeby
Step 5: truffle exec scripts/set-token-uri.js --network rinkeby
Steps 1 & 2 deal with setting up the Rinkeby connection and migrating the contracts related to NFT creation to the Rinkeby testnet. Steps 3, 4 & 5 execute the appropriate functions on the migrated contracts to randomly select characters and set up the metadata URI for the minted NFT. Please go through the README.md of the above repo to understand the other setup details.

The idea is to build a NodeJS application that drives the steps discussed above. We can use Node's child process to execute truffle commands on the CLI. Below is an example of wrapping the first step in a child process call.

app.get('/pushcontract', async (req, res) => {
  try {
    // spawn1 is assumed to be a promise-wrapped child_process.spawn, defined elsewhere
    const child = await spawn1('truffle migrate --reset --network rinkeby', [], { shell: true });
    console.log(child.toString());
    res.send('Migrate contracts');
  } catch (e) {
    console.log(e.stderr.toString());
  }
});

Sample code of executing a child process

Just like the above sample, we can write code to execute the remaining steps and complete the minting process. Prior to executing these steps, we need to create the required contract and migrate it to the Rinkeby testnet. We can also create the contract needed for minting the NFT using file manipulation in NodeJS: we make changes to a 'template' contract on the fly using NodeJS's fs library and then execute the truffle commands to migrate the contracts.
app.post('/createcontract', async (req, res) => {
  console.log('filename', req.body.filename);
  // Delete every existing contract except Migrations.sol
  files = fs.readdirSync('./contracts');
  console.log(files);
  files.forEach(file => {
    const fileDir = path.join('./contracts/', file);
    console.log(fileDir);
    if (file !== 'Migrations.sol') {
      try {
        fs.unlinkSync(fileDir);
      } catch (error) {
        console.log(error);
      }
    }
  });
  // Copy the template contract in under the requested name and
  // rename the contract identifiers inside it
  fs.copyFileSync('sample.sol', './contracts/' + req.body.filename + '.sol');
  const data = fs.readFileSync('./contracts/' + req.body.filename + '.sol', 'utf8');
  let result = data.replace(/DungeonsAndDragonsCharacter/g, req.body.filename);
  fs.writeFileSync('./contracts/' + req.body.filename + '.sol', result, 'utf8');
  // Rebuild the migration script from its backup with the same renaming
  fs.unlinkSync('./migrations/2_mycontract_migration.js');
  fs.copyFileSync('2_mycontract_migration_backup.js', './migrations/2_mycontract_migration.js');
  const data1 = fs.readFileSync('./migrations/2_mycontract_migration.js', 'utf8');
  let result1 = data1.replace(/DungeonsAndDragonsCharacter/g, req.body.filename);
  fs.writeFileSync('./migrations/2_mycontract_migration.js', result1, 'utf8');
  res.send('created contract');
});

Sample code of creating contracts from the sample

In the above code block, we copy sample.sol to the contracts folder after deleting all the other existing contracts from it. After copying sample.sol into the contracts folder under the desired name, we selectively replace the contents of the newly created contract based on the request received in the Express API call. The NFTs minted through the above process can be viewed in the OpenSea Rinkeby testnet gallery.

As discussed above, before we are ready to mint, we need to pin the image / art work to IPFS. We can build APIs for uploading and pinning the image to IPFS using Pinata (there are other ways as well). Please go through their docs to identify the APIs for uploading and pinning. Once the image is successfully uploaded, the Pinata APIs return a CID, which is a unique identifier for the uploaded file / image.

https://ipfs.io/ipfs/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?filename=filename.png

The final URI looks something like the above; the 'xxx' is where the unique CID will be. We need to embed the image URI inside the metadata JSON file before uploading the JSON file to IPFS. Please go through the metadata folder in the Dungeons & Dragons GitHub repo for more details on how the metadata JSON file should look.

app.post('/upload', upload.single('File'), function (req, res) {
  console.log(req.file);
  // data is assumed to be a FormData instance (from the form-data package)
  data.append('file', fs.createReadStream(req.file.path));
  data.append('pinataMetadata', '{"name":"' + req.file.filename + '"}');
  var config = {
    method: 'post',
    url: 'https://api.pinata.cloud/pinning/pinFileToIPFS',
    headers: {
      'Content-Type': 'multipart/form-data',
      'pinata_api_key': <pinata api key>,
      'pinata_secret_api_key': <pinata secret key>,
      ...data.getHeaders()
    },
    data: data
  };
  axios(config)
    .then(function (response) {
      console.log(JSON.stringify(response.data));
      res.send(JSON.stringify(response.data));
    })
    .catch(function (error) {
      console.log(error);
    });
});

Sample code of uploading a file to IPFS using Pinata

Apart from the above, you can also plug in the marketplace from OpenSea using the OpenSea API. Below is sample ReactJS code to fetch the NFTs from OpenSea and display them in an NFT gallery.
import React, { useState, useEffect } from 'react';
import { Container, Row, Col, Card, Button } from 'react-bootstrap';
import Imgix from 'react-imgix';

function MarketPlace() {
  const [isLoading, setIsLoading] = useState(true);
  const [NFTs, setNFTs] = useState([]);

  useEffect(() => {
    setIsLoading(true);
    var requestOptions = {
      method: 'GET',
      redirect: 'follow'
    };
    // The owner query parameter is the wallet address whose NFTs we want to list
    fetch("https://testnets-api.opensea.io/api/v1/assets?owner=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&offset=0&limit=50", requestOptions)
      .then(response => response.json())
      .then((result) => {
        console.log('Success:', result);
        setIsLoading(false);
        // Keep only assets that actually have a thumbnail to display
        const result1 = result.assets.filter(d => d.image_thumbnail_url !== null);
        setNFTs(result1);
      })
      .catch(error => console.log('error', error));
  }, []);

  if (isLoading) {
    return (<section>Loading....</section>);
  }

  return (
    <div style={{ backgroundColor: '#111' }}>
      <Container className='mt-4'>
        <Row>
          {NFTs.map(plan => (
            <Col md={3}>
              <Card bg="dark" text="white">
                <div style={{ textAlign: 'center' }}>
                  {/* <Card.Img variant="top" src={plan.image_thumbnail_url} style={{ width: "18rem", height: "20rem" }} /> */}
                  <Imgix src={plan.image_thumbnail_url} sizes="800vw" />
                </div>
                <Card.Body>
                  <Card.Title>{plan.name}</Card.Title>
                  <Card.Text>{plan.description.replace(/^(.{20}\S*).*/, "$1")}</Card.Text>
                  <Button variant="primary" onClick={() => window.open(plan.permalink, "_blank")}>Buy This NFT</Button>
                </Card.Body>
              </Card>
              <Card style={{ backgroundColor: '#111' }}><br></br></Card>
            </Col>
          ))}
        </Row>
      </Container>
    </div>
  );
}

export default MarketPlace;

Code to extract minted NFTs from OpenSea and display them as an NFT gallery

This approach gives a better understanding of what goes into minting



Importance of an Entrepreneurial Mindset For Employees

ATMECS Content Team

As companies prepare for an unpredictable post-pandemic future, employees need to be innovative and proactive, now more than ever. Having an entrepreneurial mindset is more of a necessity now. As per a Gallup poll, 87% of employees worldwide are not engaged at work. Promoting an entrepreneurial mindset culture can help change that, and companies can gain an edge over their competitors, win new and retain existing customers, and recruit top talent. Before we dive into how you can incorporate an entrepreneurial mindset and its benefits, let's first understand the concept.

What is an entrepreneurial mindset?
According to a Forbes survey, entrepreneurs are the healthiest, most engaged individuals on the planet, finding meaning in their work and being inspired to solve issues. Their mindset and approach towards achieving their objectives is a key factor in their ability to engage in entrepreneurial activity. They see opportunity in every challenge and seize every opportunity. As a result of their thinking style, they are inventors and developers who give their company the best opportunity to survive and grow. But how does it help when employees have a similar mindset? It essentially means that they are gutsy in their judgment, self-driven, and passionate about what they do. Entrepreneurs go out of their way to get clients and take chances. This does not mean that employees need to pressure themselves or be reckless. Still, employees can adopt many entrepreneurial traits, such as passion, dedication, taking chances, and taking responsibility.

Why is an entrepreneurial mindset important?
The benefits for people who choose to lead may be substantial. However, the corporate sector is fraught with difficulties. Running a business, or even considering starting one, is not for everyone. Understandably, some people prefer the security of a 9-5 job with a steady paycheck. Creating a new venture is a brave step. It takes a certain degree of courage and determination to confront its potential for disappointment. Regardless, an entrepreneurial mentality, when instilled in employees, shifts and helps determine an individual's approach to problems. They have a unique perspective on things and the capacity to adapt, making them well-suited to developing a successful firm. There are mistakes and achievements around every turn in the corporate world. The entrepreneurial mindset, and the traits and abilities that come with it, are based on a drive to achieve. They see challenges as opportunities.

Difference between employee vs. entrepreneur mindset

Security vs. freedom
In reality, we don't see employees with an entrepreneurial mindset as often as we would like. People somehow function with "job security" as the ultimate goal: complete high school, attend college, earn a degree, find a solid, well-paying, secure job with benefits, and save for retirement. Children with entrepreneurial parents have a 60% higher probability of starting their own business than children who do not have an entrepreneurial background. Entrepreneurs value security as well; they simply place a considerably higher value on freedom.

Selling time for money vs. providing value for money
Employees make decisions based on the hour and work a fixed number of hours per day, for which they are compensated per hour at the end of every week.
However, for someone with an entrepreneurial mindset, the idea of giving away one of our most valuable assets, time, purely to benefit someone else is pure pain.

Fear vs. self-motivation
More often than not, employees are driven to the workplace because they fear losing their job security. Self-motivation is what the employee mindset often lacks. Entrepreneurs are motivated by concepts. They focus on providing value to their clients and customers.

Being held responsible vs. self-accountability
Employees frequently expect accountability to come from others, their superiors. The boss tells employees what they need to do, when they need to do it, and that it should be done correctly. When things don't go according to plan, employees quickly indulge in fault-finding or shifting blame. Entrepreneurs must be responsible for themselves. They should be self-disciplined and complete the tasks that are required. Our clients hold us accountable. However, as an entrepreneur, you will not have a supervisor or a time clock, so it's up to you to be punctual, to do tasks, and to do them correctly. Henry Ford once famously said, "Quality means doing it right when no one is looking."

How companies can inculcate an entrepreneurial mindset in employees
So, how can you develop an entrepreneurial attitude in the culture of your company? These five methods may be useful in getting you started.

Encourage a single point of focus: the client
Help employees realize that your firm is focused on the client, no matter what role they have or what task they execute. Assure them that everyone's job has an impact on the client and the client's customers, whether directly or indirectly. Encourage a focus on customer service and happiness throughout the organization. By answering questions such as these, you can inspire all of your coworkers to think like your clients:
What is the client's desire?
How can I contribute to my client's happiness?
How can I improve the quality, speed, and ease of my client interactions?
What does the client value so highly that pricing becomes less of a factor?

Diversity vs. knowledge sharing
Diversity of knowledge may help to foster creativity and invention, both of which are important aspects of the entrepreneurial mindset. Try being more aware of your team's cognitive variety so you can improve their performance and assist them in growing.

Allow fresh ideas to flourish
Allow individuals to develop new and improved methods for whatever role they play. When ideas mix with other ideas and take on new shapes, they can thrive. Encourage individuals to contribute any ideas that might help the firm make good improvements, such as keeping up with industry trends or trading off meeting frequency for quality. Employees can contribute innovative ideas, shortcuts, comments, and other proposed enhancements to an internal blog.



Mobile Cloud Computing – Overview, Challenges and the Future

ATMECS – Content Team

At present, mobile applications have reached a level of advancement that once seemed almost impossible. Individuals can carry out actions like voice commands, face recognition, and more with a simple handheld device. App developers now possess the ability to create applications with an impressive degree of user-friendliness. This is largely because of the massive proliferation of Mobile Cloud Computing.

The Definition of Mobile Cloud Computing
Mobile Cloud Computing, or MCC for short, is a conjunction of three technologies, namely cloud computing, mobile computing, and a wireless network. All three components act together to create an application that provides extensive computational resources to a user. The use of MCC benefits the user as well as the cloud provider: users get the benefits of high storage and easy access, while the service provider collects the user fee from a large number of users. Being a win-win model, MCC has witnessed a rise in demand and has also emerged as a popular option for app developers, owing to the lack of restrictions that the mobile cloud offers during app development. Regular app development faces constraints like the limited space that mobile devices possess as well as the operating system. With the combination of mobile and cloud computing, developers can ensure that tasks like data processing and data storage take place seamlessly.

Challenges accompanying mobile cloud computing
Though it may sound like using MCC to develop applications is a walk in the park, it is not so in practice. A few challenges that crop up while using this technology to develop apps include:

Less network bandwidth
Deployment using MCC requires continuous communication. This means that a developer may face problems when the network being used is wireless, because wireless networks (for example, 3G, Wi-Fi, or 4G networks) tend to be less reliable and offer lower bandwidth. Therefore, the speed of the applications is much slower in comparison to wired networks. While 5G networks remain a ray of hope, it is much too early to judge their effectiveness.

Service availability
Mobile users may receive a very weak signal, hindering the speed as well as the storage capacity of the application. Moreover, users also experience issues like breakdowns, transportation crowding, and lack of coverage.

Hardware issues
Mobile phones, even with the latest technology, have a finite source of energy: batteries. Cloud-based apps increase battery use and would, therefore, drain it much more quickly. This can hinder MCC development, as the user base can decline along with an increase in complaints regarding the impact on battery life.

Operating system issues
Applications created using MCC will function on different operating systems, so an application must be compatible with platforms like Android, iOS, and Windows Phone. To do so, the development team must possess knowledge of an IRNA, or Intelligent Radio Network Access, technique.

Security issues
The management and identification of threats has proved to be a challenging task. This is because MCC functions on a wireless network, so there are more chances of overlooking, or a general absence of, network information. Moreover, with multiple hand-offs within the architecture and a general lack of multi-layer security, vulnerabilities are high.
These security issues stem from vulnerabilities in the MCC architecture. With multiple users accessing the clouds, there is a threat to the safety of data: if one user's data is breached, then other users are at risk as well.

The future of mobile cloud computing
Mobile Cloud Computing is a growing industrial space in itself. As per stats from Mordor Intelligence, by 2020 the global mobile cloud computing market had registered a total value of over USD 30 billion. Growing at a CAGR of 25.28%, the industry is expected to reach USD 118.70 billion by 2026. There would be more scope for startups to rise, as an MCC business does not require the significant investment that goes into setting up a brick-and-mortar office. Moreover, the rise of cloud computing as a business need only points to a brighter future for firms starting out in the space. This rise in demand for MCC can be attributed to the following:

Real-time, easy data access
The storage of data on the cloud makes it possible for users to easily find their data in a single location, owing to the presence of data synchronization facilities between two devices, or between a device and a desktop. Therefore, data can be accessed anytime, anywhere, on any device, in real time.

Massive space for storage
As mentioned before, computing takes place on a cloud, which is known for its high storage capacity. Therefore, users need not worry about shelling out money for external memory cards or using up their internal memory.

Extension of battery life
Since data processing takes place on the cloud, the device's battery need not do much of the heavy lifting. Therefore, there is less strain on the device battery as a cloud-based application runs in the background.

Mobile Cloud Computing certainly makes app development easier with its lack of restrictions. Furthermore, it gives users easy access to data and better storage. With this many benefits, it is no surprise that 35% of successful mobile application development projects use cloud-based app development. This demand is only likely to increase in the future as sectors like healthcare and fitness adopt MCC for developing enterprise or consumer-centric applications.



Understanding the Implications of Business Email Compromise Scams

Prabhakaran Parameswaran – Cybersecurity Services Team

Enterprises and individuals alike have the potential to fall victim to more than 40 types of fraud. Among these, frauds carried out through Business Email Compromise (BEC) methods pose a significant threat. As per the cybercrime reports compiled by the FBI, BEC scams account for over $1.8 billion in cumulative losses globally. BEC attacks are said to be around 64 times more devastating than other cybercrimes due to the losses they incur.

What is a Business Email Compromise scam?
A Business Email Compromise belongs to the realm of cybercrime. An attacker compromises enterprise or corporate email accounts and then moves to defraud the company as a whole or an individual employee. The attacker is able to carry out this fraud because they have gained access to specific sensitive information. Mainstream media has also referred to this type of attack as the "man-in-the-mail" or "man-in-the-middle" attack. The reason is that these attacks go undetected: the party on the receiving end believes they are exchanging confidential emails with the legitimate other party, while the attacker has gained access to all of these emails.

Who do BEC attackers target?
These scams are directed towards companies the majority of the time. There are five ways this scam can take place:

Compromising the account
The hacker gains access to a specific employee's account and uses their identity to infiltrate the databases holding sensitive information.

Fake invoice
The hacker looks to target foreign suppliers in this case. The basis of this attack is the hacker acting as a supplier and requesting payments to their account.

Impersonation of an attorney
Another common tactic is taking on the identity of a legal representative. Once the hacker does so, they approach the employees for a fund transfer.

Data theft
The HR department falls victim to this kind of threat. The hacker gains access to personal information about employees from the records; the targets are usually CEOs or higher-ups working in management.

CEO fraud
After the hacker obtains access to CEO information, they can assume the identity of the CEO. These individuals can then send out fraudulent emails to the finance department.

Steps that attackers utilise
One of the best approaches to managing security breaches involves tracing the steps of the attacker. This helps not only to examine the existing security measures but also to predict potential future steps the hacker might take. A BEC attack takes place in the steps below:

Searching for a target
The hacker first searches for an enterprise and then for a suitable employee working in the said enterprise, attacking via one of the methods above. Hackers use various platforms like LinkedIn or company websites to search for any sort of contact information.

Sending out emails
The attacker now sends emails to the targeted employee's email account. These are phishing emails containing malware, with links that redirect the employee to a fake Outlook 365 login webpage. This webpage is created by the attacker and looks exactly like the authentic page.
Gathering information
Once the employee enters their login credentials on the dummy website, the attacker can copy down the employee's email address and password. The next step is to create a fake domain that resembles the company's. In this domain, the hacker enters the victim's email address and bypasses the web filters. Now that the attacker has access to the email account, they alter the real account's settings so that all emails from the real account are forwarded to the attacker. The attacker can then gather information regarding billing, invoices, and wire transfers.

Conduct social engineering
The hacker is essentially looking for emails that contain information about any kind of payment that took place between the company and the employee. These emails are doctored so that the attacker can request payments using them. The altered email is sent within the same mail chain to avoid suspicion. The money transferred by the employer now reaches the attacker's account.

Collect financial reward
Now the attacker can finally profit off the scam. In the majority of these cases, the payments do not undergo verification, since the employer sees the same mail chain and thinks nothing of it.

How can a security team detect a Business Email Compromise scam attack?
Detecting a security breach, or better yet a phishing email, is the best-case scenario here. Implementing a proper security policy should be at the forefront of a security team's efforts. A typical detection process against BEC attacks should include a series of scanning facilities or software that carries out the following:

Monitoring: These facilities provide visibility into the overall activity of the user, depending on what email platform they use. This is especially useful for enterprises that deploy on a cloud.
Alerts: The software or technology used should send alerts to the security team when a login is detected. In addition, the software can send alerts when there is a change in the browser from which the login took place.
Audits: Regular audits will ensure that all phishing emails are removed from the inbox. The audits can be automated or manual.
Redirects and forwards: Emails can also be safely checked to see if their links redirect users to external domains. This secures all the possible channels that hackers may utilize.

Preventive measures that security teams can implement
The detection of a BEC scam is only one aspect of the cybersecurity policy that enterprises can implement. In addition to



Story Telling Future: Improvement & Innovation

Jeff Caldwell – Vice President, Digital Integration & Cloud Partners

Animation, live action, scripted TV, reality programs, music, and other content creation efforts all center on traditional development, pre-production, production, post-production, and distribution activities. Enjoyment and monetization occur at the end of the process. Over many decades, major innovation came in terms of color, sound, cameras, and digitalization. The recent pandemic has accelerated workflow improvement in the areas of cloud compute, storage, and production teams working at home and across the world. But there is more to come on both the innovation and improvement fronts. Improvement will continue in areas such as industry anthologies, platform adoption, security advances, cost-effective data storage, remote compute, file movement, content transmission, and the other cornerstones necessary for creation anywhere. But more important is how technology innovation will change the way stories and music are created. It's not enough to simply take old production processes and place them in the cloud. That is just improvement. With all the digital innovation capabilities at our fingertips, we must follow Steve Jobs' mandate: "Think Different."

Creative Process & Improvement
Improvement – Make something that already exists better
Let's take a look at storytelling as part of the overall media creation and consumption process. Songwriters have an idea; often they combine music and words to turn this idea into a story or convey a message, and the listener feels an emotion and likes or dislikes it. Typically, a movie, TV show, or video comes from an idea that is greenlit; then teams of creators and talent are assembled, and the final product is created and distributed to the viewer and fan. It is very much a creative process with a starting point and an end that is reached before consumption, revenue, and enjoyment begin. This traditional process allows for little or no interaction between the creators and the consumer until the magical distribution barrier is reached. Our industry has been focused on taking the age-old creation workflow and adding technology, for the most part, to improve existing processes, BUT NOT INNOVATING the underlying creative process at the heart of the storytelling future.

Color was innovative.
Sound was innovative.
Both of these capabilities were new and had never been done before.

Today's improvement in our creative industry deals with some of the following: what cloud provider or providers should the industry use, what editing toolset(s) are best, what VDI technology best reduces latency, how do we implement better security, how can files be compressed and moved around the globe, how much security is enough, how can AI be added to monitor the process, how can production costs decrease?

Creative Process & Innovation
Innovation – Make something new
How can we use technology to drive innovation, not just improvement? Let's suppose we want to include the viewer/consumer in the creative process and move media enjoyment from the end of the process to being part of the process. Let's move enjoyment from a sort of passive experience to an interactive creation experience. Along the way, we can make storytelling not one-size-fits-all but more of a tailored viewer experience. (And create new revenue streams.)

Didn't like the way Game of Thrones ended? Don't worry, you can make your own ending or endings!
What would an innovative interactive workflow look like? Maybe we need an Interactive Media Creation Platform, IMCP. (We have to have a three- or four-letter acronym… it's part of the technology business.) Here is how it would work. First, a basic story is conceived – take, for instance, a western story set in a galaxy far, far away. The storyline is established, production and talent teams are assembled, and production begins. Consumers can subscribe to the "dailies" via the IMCP; feedback can be discussed in Facebook/Zoom-type chat rooms around the world; characters and scenes could be created or changed in minutes based upon CGI technology; alternative storylines could be conceived; and new live-action scenes could be shot the next day. The direction and production staff could take this input into consideration and proceed with the original storyline, or move the direction of the content to new horizons based upon the interaction. New content could also be localized based upon global input.

Another way the IMCP could be used is at the end of the creation process. All of the artifacts and snippets of the production that end up on the virtual cutting room floor could be placed within the editing section of the IMCP, allowing the consumer to make their own movies or content. The user and fan community could create a new partial movie, short Quibi-type content, or a full-length movie based upon the original extras and CGI-created video, and then publish it within part of the IMCP for friends and the public at large. Fees could be charged for this engagement process, and also to view the new content. Revenue could go back to the original content rights owners. All of this interaction would be based upon technology and tools we have at our fingertips, such as social media interactive sites, group collaboration room technology, cloud-based content editing and publishing tools, and common device, mouse, swiping, and typing skills.

So, the challenge has been thrown down for innovation over improvement. Who will join this quest?

Story Telling Future – Innovation
~1910s – first color film
October 6, 1927 – first feature film with sound
July 4, 2022 – first movie created using IMCP technology

IMCP… coming soon to a desktop, tablet, or phone near you

Jeff Caldwell is the Vice President of Digital Integration and Cloud Partners with ATMECS. Jeff is well known in the media and entertainment industry, and the technology realm in general, as a business professional with an innovative vision who also has the expertise to make the vision a reality. Currently Jeff is focused on helping organizations achieve five key digital business goals: enterprise efficiency, advanced industry analytics, customer and social engagement, business acceleration and


Why is writing a blog worth it?

Tushar Nayak – ATMECS Content Team

'Thoughts' constitute 95% of day-to-day life, while a conscious stream of thoughts takes up to 90% of that 95%; the remaining 5% comes in between the hours of sleep, when the brain goes into its resting state and repairs, rewires, and refreshes. Studies show that our mind never truly rests, even when we sleep, and to mentally compose yourself to sit down, gather your thoughts, and put them on a piece of paper is, quite frankly, daunting. Neurological studies have also suggested that humans, by the end of their life journey, have only used 7-10% of their cerebral capacity. Dolphins are the perfect example of what humans could do with 20% of their brain; imagine communicating with each other through sonar. For now, let us not get into the 'what if' of using 100% of our brain, and instead look at what amazing things we can do with just 7% of cerebral capacity.

I read a book some time back that said 'writing has a similar effect on our brain as monks experience during their meditative state'. The spikes shown during neurological experiments conducted on both writers and monks have been in the same neighborhood. In simple terms, writing puts your brain in a conscious flow of meditative state. Honestly, it would be amazing to see many of the readers pick up writing as part of their daily routine and, in the process, find a way to let off some steam from their stress-filled days. My intent here is simple: to nudge you a little – to push you to take up writing a blog. Here are a few pointers I picked up reflecting on my own blogging experience:

1. Feel good about yourself – Give yourself a pat on the back for taking the initiative of writing your first blog (maybe second or third).
2. Pick an idea or a topic – Pick a topic around your area of interest, something you are passionate about. If you are confused, ask your supervisor to help you with an idea that can be fronted on a corporate website.
3. Research and multiply – Curiosity is not everything, and your writing can only be as strong as your research on the subject; it should add value and give readers a different outlook.
4. Shoot some bullets – While your brain is busy churning on an idea, use that time to narrow down the bullet points that you think will be important.
5. Naturally and emotionally – When you pick a topic, think of yourself as a reader and ask yourself one question: "Would you read something that is devoid of the writer's emotion?" Write naturally and fill it up with emotion.
6. Hook your readers – Do not get into the habit of approaching an idea broadly; think of a clear angle. Always think through and write a strong opening statement, and back it up with facts, figures, and studies.
7. Structure – Match your article against those bullet points to make sure you have covered all of them; read the entire piece as a reader to see whether it flows freely; make the necessary changes if something does not fit; and finally, critique yourself as a reader before sharing it for proofreading, suggestions, and advice.
8. Closure – Close the article with your natural viewpoint, adding value if possible. If you strongly disagree with something, be gentle while offering a negative bias.
9. Every artist seeks credit (yourself included) – One of the best ways to keep your readers engaged is to ask them to respond. Keep the forum open for people to share their two cents, and do not forget to ask them to share it on social media.
In the end, we all seek reward for our labor!

10. Momentum – Do not be too hard on yourself just because you cannot find an inspiring topic. Momentum is the key; keep at it, and I am sure you will keep coming up with great ideas worth writing about.

Most people never take up writing for fear of being criticized for their writing style. I have known people in my professional career who were amazing writers, and I have seen them abandon their passion for writing – now languishing in some dark corner – because they could not find inspiring topics, among other reasons. It took me a few years to motivate myself to write, so I decided to reach out to experts and my supervisors on how to express myself better, and within a few weeks of receiving help, I moved from mute to motivated and realized there was a common thread: "You don't fear being criticized, you were afraid to reach out."

To end this on a positive note, I know some of the leaders at ATMECS take great pride in expressing themselves through writing. Why can't you be one of them? Isn't writing a blog worth your time?

Author – Tushar Nayak, ATMECS Content Team
