Intel Plans $20 Billion Factory in Ohio to Accelerate Domestic Chip Manufacturing

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/3787c630fdec3fa4696d761a591e5947.jpg

A rendering of Intel’s planned $20 billion microchip production facility, currently scheduled to come online in 2025.
Image: Intel

Intel will invest $20 billion in a sprawling new processor manufacturing facility outside Columbus, Ohio, the tech giant announced early Friday. The plant, billed as the single largest private-sector investment in Ohio’s history, is expected to create 3,000 jobs by the company’s projections, not including at least 7,000 temporary construction jobs.

Construction on the facility is scheduled to begin in late 2022, with production coming online around 2025. Intel is billing the first two plants as just the beginning of a much larger project on the roughly 1,000 acres the company has acquired in Licking County.

Intel has also pledged $100 million in education funds for the region in an effort to create the workforce needed for the chip-making facility. The money will fund collaborative research projects with universities in the area, as well as help develop curricula specific to semiconductors for undergraduate programs.

The CEO of Intel, Pat Gelsinger, is scheduled to appear at the White House with President Joe Biden via video link to formally announce the project on Friday. The $52 billion CHIPS Act, which passed in the Senate but has stalled in the House, will likely provide funding for at least some of Intel’s long-term venture. The CHIPS Act is an attempt to spur chip manufacturing in the U.S. to make the country less dependent on its adversary in the New Cold War, China.

“The impact of this mega-site investment will be profound,” Keyvan Esfarjani, Intel senior vice president of Manufacturing, Supply Chain and Operations, said in a statement.


“A semiconductor factory is not like other factories. Building this semiconductor mega-site is akin to building a small city, which brings forth a vibrant community of supporting services and suppliers,” Esfarjani continued.

“Ohio is an ideal location for Intel’s U.S. expansion because of its access to top talent, robust existing infrastructure, and long history as a manufacturing powerhouse. The scope and pace of Intel’s expansion in Ohio, however, will depend heavily on funding from the CHIPS Act.”

Did you catch that last part? Intel will pony up for the first couple of factories, but if you really want to see a huge facility, the U.S. government will need to get the CHIPS Act done.

The company also says it expects other suppliers to pop up in the region to help support the new facility:

In addition to Intel’s presence in Ohio, the investment is expected to attract dozens of ecosystem partners and suppliers needed to provide local support for Intel’s operations – from semiconductor equipment and materials suppliers to a range of service providers. Investments made by these suppliers will not only benefit Ohio but will have a significant economic impact on the broader U.S. semiconductor ecosystem. As part of today’s announcement, Air Products, Applied Materials, LAM Research and Ultra Clean Technology have indicated plans to establish a physical presence in the region to support the buildout of the site, with more companies expected in the future.

“Today’s investment marks another significant way Intel is leading the effort to restore U.S. semiconductor manufacturing leadership,” CEO Gelsinger said in a press release.

“Intel’s actions will help build a more resilient supply chain and ensure reliable access to advanced semiconductors for years to come,” Gelsinger continued. “Intel is bringing leading capability and capacity back to the United States to strengthen the global semiconductor industry.”

“These factories will create a new epicenter for advanced chipmaking in the U.S. that will bolster Intel’s domestic lab-to-fab pipeline and strengthen Ohio’s leadership in research and high tech.”

CEO Gelsinger also plans to hold a dedicated press conference at 2:30 p.m. ET/11:30 a.m. PT on Friday to discuss the company’s plans for a “globally balanced” supply chain. Keyvan Esfarjani, Intel senior vice president and general manager of Manufacturing, Supply Chain and Operations, will also be on the webcast, according to the company.

Gizmodo

5 best examples of eCommerce dashboards to help you take control of your business

https://www.noupe.com/wp-content/uploads/2022/01/mark-konig-Tl8mDaue_II-unsplash-1-1024×576.jpg

Being an entrepreneur means you must keep a lot of things under control. You have to always know what is going on in sales, marketing, finance, inventory, and other aspects of your business.

Moreover, you probably use a myriad of tools to run them. Switching between a CRM, marketing automation software, Google Analytics, an accounting system, and your eCommerce back-end won’t give you a clear picture of how your business is doing.

Wouldn’t it be great to have one screen showing the most important information about your business? eCommerce dashboards are here to give you this opportunity.

What is a dashboard and why do you need it?

An eCommerce dashboard is a visual representation of up-to-date data (metrics and KPIs) that matter to the business. It helps you analyze the main indicators so you can improve results. Dashboards differ from other analytics tools in that they surface insights into the most vital data. Thanks to dashboards, you can check the vitals of your business at any time, not just during your monthly reports.

This becomes possible because they accumulate real-time (or near real-time) data. With the help of a well-tuned dashboard, you can quickly spot and fix problems and see if your company is effective or not. It takes less time and effort to check a dashboard than to dig into monthly reports and collect info manually. 

Dashboards provide statistics on different areas of your online store, including, but not limited to, monthly sales, website traffic, and marketing campaigns. You can compare data across any period of time: days, months, or even years. 

Let’s look at the most popular examples of dashboards for eCommerce.

5 eCommerce dashboards examples

We’ve prepared five examples of dashboards that will help you run your store. Keep in mind that there are no universal solutions that can fit everyone. The best solution for you depends on the complexity and scale of your business: are you going to create a marketplace or a small online store? How many product categories and markets are you going to cover? All of this will impact the types and number of KPIs you will have to track. 

Store overview

The overview dashboard shows the performance of the store across all major areas of the business, like sales, marketing, and inventory. This type of dashboard enables you to see the most important metrics of your business on one screen. 

Goal: to get a snapshot of your store performance.

KPI examples: 

  • Total sales
  • Sales by product, marketing channel, geography, etc.
  • Traffic
  • Average order value
  • Conversion rate
  • Repeat customer rate
  • Customer Lifetime Value
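Once order and traffic data sit in one place, these KPIs reduce to simple arithmetic. A hypothetical Python sketch (all numbers invented for illustration):

```python
# Compute a few store-overview KPIs from raw order/traffic counts.
orders = [120.0, 80.0, 200.0]   # order values for the period, in dollars
visitors = 1000                 # unique visitors in the same period
repeat_customers = 1
total_customers = 3

total_sales = sum(orders)                            # total sales
average_order_value = total_sales / len(orders)      # AOV
conversion_rate = len(orders) / visitors             # orders per visitor
repeat_customer_rate = repeat_customers / total_customers

print(total_sales, round(average_order_value, 2), conversion_rate)
```

A dashboard simply performs computations like these continuously and plots the results.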

Dashboard Example: 

Store overview dashboard

Google Analytics for eCommerce

Google Analytics is the number one tool for analyzing website traffic. It provides so much information, however, that it can be difficult to digest. That is why you need a Google Analytics eCommerce dashboard that shows only the most meaningful data for your online store. 

Goal: to check your website’s most current statistics.

KPI examples:

  • Unique visitors
  • Average session duration
  • Bounce rate 
  • Time on site
  • Goal completions 
  • Traffic sources
  • Top pages
  • Top keywords

Dashboard Example: 

Google Analytics dashboard

Sales

An eCommerce sales dashboard shows KPIs like total sales, as well as sales by product, marketing channel, location, etc. It is a strategic tool that gives you an instant overview of your sales so you can pinpoint any problem as soon as it occurs. It organizes the most precise and recent data related to commercial success.

Goal: to see current sales performance and compare it to your sales target.

KPI examples: 

  • Total sales
  • Sales target
  • Average order value
  • Lead conversion rate
  • Up-sell and cross-sell rates
  • Profit margin per product

Dashboard Example: 

Sales dashboard

Marketing

eCommerce marketing dashboards present vital digital-marketing KPIs so you can track channel and campaign results in real time. These KPIs include traffic sources, macro and micro conversions, leads, ROI of marketing channels and campaigns, etc. The information is collected from channels such as Google Analytics, email marketing tools, CRM, social media, and others. 

Goal: to see how effective your marketing is in real-time. 

KPI examples: 

  • Traffic
  • Leads
  • Sales
  • Conversion rate
  • Conversions by channel
  • Customer acquisition cost (total and by channel)

Dashboard Example: 

Marketing dashboard

Finance

This dashboard focuses on your budget and analyzes existing assets, profit and expenses, cash flow, income, and other financial indicators. It allows you to see the current figures and financial details to get helpful insights and increase the cost-efficiency of your business.

Goal: to see the current financial state and overall profitability of the business.

KPI examples:

  • Revenue
  • Net profit
  • Gross / Net profit margin
  • Cash balance
  • Working capital
  • Cost of goods sold

Dashboard Example: 

Finance dashboard

Wrapping up

To sum up, eCommerce metrics dashboards are useful tools that help track your business performance on a daily basis. They can either provide an overview of the entire business or cover specific areas, from SEO to finances. 

The fundamental advantage is that you can check your KPIs any time, not just once a month when you prepare your reports. This will help to detect and fix problems almost on the spot. Another advantage is that dashboards are visual tools that allow you to digest information easily. Additionally, they are customizable and can show the indicators which you need the most.

In this article, we shed some light on what dashboards are and why they matter. We also gave examples of dashboards that might be helpful for your eCommerce business. Give these tools a try and get a clearer picture of how your store is doing.


noupe

I don’t think this meme makes the point they want it to make

https://gunfreezone.net/wp-content/uploads/2022/01/FJgyZvLX0AIDtDM.jpeg

I’ve seen this meme posted several times online:

 

The point is always the same.  It’s not fair that the Red area is represented by eight senators and the green represented by two senators.

So the conclusion they draw is that we must abolish the Senate.

Because the Left already controls the House of Representatives, they need to attack the Senate.

I look at this and come to a different conclusion.

The population density of NYC is about 27,000 people per square mile.  The average for the United States is 94 per square mile.

New York City is roughly 288 times as dense as the average.

Cities like NYC are unnatural and unsustainable.

They can’t grow enough food to feed their population or have landfill space to dispose of their garbage.

Clearly before we abolish the Senate we have to abolish the megacities.

Laracon Online Winter will be free this year!

https://laravelnews.imgix.net/images/laracon-online-winter2022.jpg?ixlib=php-3.3.1

Join us on February 9th, 2022, for this year’s Laracon Online winter edition. It will be free for everyone!

For the first time ever, we’ll be streaming Laracon for free on YouTube, allowing us to reach the entire Laravel community. This is a huge moment (and experiment) for us. Thank you to the following incredible partners for helping us achieve this!

Gold Sponsors

Silver Sponsors

Community Sponsors

If your company would like to partner for the event, we still have a few spots left.

Schedule

Here is the schedule for the day; all times are EST (GMT-5):

  • 8:55 AM – Opening – Ian Landsman
  • 9:00 AM – Actions are a Dev’s Best Friend – Luke Downing
  • 9:40 AM – Modularising the Monolith – Ryuta Hamasaki
  • 10:20 AM – Digital Nomadding in the Time of COVID – Polly Washburn
  • 10:35 AM – Typing In and Out of Laravel – Craig Morris
  • 10:50 AM – Everything Flex – Shruti Balasa
  • 11:20 AM – Dealing with Criticism – Kristin Collins
  • 12:00 PM – A Little Bit More Lambda – Aaron Francis
  • 12:40 PM – Web 3.0 and Laravel – Marcel Pociot
  • 1:40 PM – Laravel Update – Taylor Otwell
  • 2:40 PM – How to do API integrations in Laravel – Steve McDougall
  • 3:20 PM – Building Awesome Blade Components With Alpine – Caleb Porzio
  • 4:30 PM – Discovering Route Discovery – Freek Van der Herten
  • 4:35 PM – The Art of Programming – Erika Heidi
  • 4:50 PM – Using Lando for local Development – Rory McDaniel
  • 5:05 PM – The Jigsaw Challenge – Zuzana Kunckova
  • 5:30 PM – Laravel for millions and some… – Ashley Hindle
  • 6:00 PM – Keep Thinking Like a Hacker – Stephen Rees-Carter

Can’t watch it live? No problem, all talks will be recorded and available online for viewing at your convenience shortly after the conference ends.

Mark your calendar for February 9th, 2022, and join the conference for free live on YouTube! For complete details, check out Laracon Online and subscribe to the YouTube channel.

PS: If you want some sweet desktop images from the art we have them here:

Laravel News

Define a Route Group Controller in Laravel 8.80

https://laravelnews.imgix.net/images/laravel8.jpg?ixlib=php-3.3.1

The Laravel team released 8.80 with the ability to define a route group controller, render a string with the Blade compiler, PHPRedis serialization and compression config support, and the latest changes in the v8.x branch.

Specify a Route Group Controller

Luke Downing contributed the ability to define a controller for a route group, meaning you don’t have to repeat which controller a route uses if the group uses the same controller:

Route::controller(PlacementController::class)
    ->prefix('placements')
    ->as('placements.')
    ->group(function () {
        Route::get('', 'index')->name('index');
        Route::get('/bills', 'bills')->name('bills');
        Route::get('/bills/{bill}/invoice/pdf', 'invoice')->name('pdf.invoice');
    });

Render a String With Blade

Jason Beggs contributed a Blade::render() method that uses the Blade compiler to convert a string of Blade templating into a rendered string:

// Returns 'Hello, Claire'
Blade::render('Hello, {{ $name }}', ['name' => 'Claire']);

// Returns 'Foo '
Blade::render('@if($foo) Foo @else Bar @endif', ['foo' => true]);

// It even supports components :)
// Returns 'Hello, Taylor'
Blade::render('<x-test name="Taylor" />');

PHPRedis Serialization and Compression Config Support

Petr Levtonov contributed the ability to configure PHPRedis serialization and compression options instead of needing to overwrite the service provider or define a custom driver.

The PR introduced the following serialization options:

  • NONE
  • PHP
  • JSON
  • IGBINARY
  • MSGPACK

And the following compressor options:

  • NONE
  • LZF
  • ZSTD
  • LZ4

These options are now documented in the Redis – Laravel documentation.

Release Notes

You can see the complete list of new features and updates below and the diff between 8.79.0 and 8.80.0 on GitHub. The following release notes are directly from the changelog:

v8.80.0

Added

  • Allow enums as entity_type in morphs (#40375)
  • Added support for specifying a route group controller (#40276)
  • Added phpredis serialization and compression config support (#40282)
  • Added a BladeCompiler::render() method to render a string with Blade (#40425)
  • Added a method to sort keys in a collection using a callback (#40458)

Changed

  • Convert “/” to “\” in the -e parameter in Illuminate/Foundation/Console/ListenerMakeCommand (#40383)

Fixed

  • Throws an error upon make:policy if no model class is configured (#40348)
  • Fix forwarded call with named arguments in Illuminate/Filesystem/FilesystemAdapter (#40421)
  • Fix ‘strstr’ function usage based on its signature (#40457)

Laravel News

A comprehensive guide on how to design future-proof controllers: Part 1

https://hashnode.com/utility/r?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642053183825%2FTrLri6Qo6.jpeg%3Fw%3D1200%26h%3D630%26fit%3Dcrop%26crop%3Dentropy%26auto%3Dcompress%2Cformat%26format%3Dwebp%26fm%3Dpng

Introduction

When you are still in the learning phase with any technology, your focus is on making your app work, but the real progress starts when you begin to ask yourself, “How can I make this better?” One simple principle that you can immediately apply to your existing or new codebase for cleaner and more maintainable code is the Separation of Concerns principle.

Most of the server-side codebases I have come across have controllers that contain code specifying the real-world business rules on how data can be created, stored, and changed on the system. If you want to learn how to build controllers that are clean, concise, and easily maintainable, then this series is for you.

What you will learn after reading this article

  1. You will have a solid understanding of the Separation of concerns principle

  2. You will be able to identify the major steps involved in the lifecycle of a request on the server-side

  3. You will understand the role of the controller on the server side. This will ensure that the lines of code present in your controller functions are the ones that absolutely need to be in the controller

Prerequisites

  1. Understanding of client-server architecture
  2. Familiarity with model-view-controller architecture
  3. A basic understanding of Object-Oriented Programming

With all that out of the way, let’s move 🚀

Understanding Separation of concerns

What is a concern?

A concern is a section of a feature or software that handles a particular functionality in the system. A good example of a concern in a well-designed backend system is request validation, which means there is a part of the code that accepts the data coming from the client to make sure all the information is valid or at least in the right format before sending it to the other parts of the system.

What does the term Separation of concerns mean?

Since we know what a concern is, understanding the idea behind the Separation of concerns will not be difficult. It promotes building software in such a way that code is broken down into separate components or layers, with each layer handling a specific concern. An example is a feature that retrieves data from the database and then formats the data based on the client’s request. Placing both pieces of logic in the same function is a bad idea, since retrieving data from the database is one concern and formatting the retrieved data is another.

Lifecycle of a request on the server

I have worked on building many backend systems that provide services to clients, and most of them follow a similar pattern with three major steps:

  1. Request validation: This refers to the part of your code that ensures that the data sent by a client is in a valid and acceptable format. A simple example is making sure that a value sent from the client as an email is actually a valid email address.
  2. Business logic execution: This is the section of your codebase that contains the code that enforces real-world business rules. Let’s use an app that allows a customer to transfer money from one account to another. A valid business rule is that you cannot transfer an amount that is greater than your current balance. For this app to work properly, there has to be a section of your code that compares the amount you are trying to transfer and your current balance and makes sure the business rule is obeyed. That is what we refer to as business logic.
  3. Response formatting and return: This refers to the section of your codebase responsible for making sure the data returned to the client after business logic execution is properly formatted and well presented, e.g., as JSON or XML.

Backend receives request.png

I have come across a lot of codebases that perform these three steps in a single function or method, where lines 10 to 15 handle request validation, lines 16 to 55 handle all the business logic with long if-else statements, loops, etc., then lines 56 to 74 format the response based on certain conditions, and finally, line 75 returns the data to the client. That’s about 65 lines of code in a single function! That is a ticking time bomb waiting to explode when a new engineer joins the team or when you come back to make more changes to the code.

Understanding the role of the controller in the backend request lifecycle

Imagine we have 3 tasks involving the same feature.

  1. Fix a bug in request validation
  2. Change the way data is retrieved from the database (use Eloquent ORM instead of raw queries)
  3. Add extra meta-data to the response returned to the client

If our controllers are designed in a way where each method contains all these three major steps involved in fulfilling a request on the server-side, then the flow looks something like the image below

Backend receives request 1.png

Handling these tasks becomes a nightmare because everyone on the team will be modifying the same function, and good luck merging all those changes without having to resolve merge conflicts 🙄.

So, what exactly should the controller do?

The controller should serve as a delegator: it accepts a request and its associated data from the client, assigns the different tasks involved in fulfilling the client’s request to different parts of the codebase, and finally sends a proper response (success or failure) to the client depending on the result of the executed code.
I have illustrated this using the image below.

controller delegator.png

If our controllers are built this way, making changes will be incredibly easy, bugs will be easy to trace and the responsibility of each class and other classes it depends on to fulfill its tasks will be immediately visible even to someone looking at the code for the first time.
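The series will use Laravel, but the delegation idea is language-agnostic. Here is a minimal Python sketch of a controller that only delegates; all class names are invented for illustration:

```python
# A controller that only delegates: validate, execute business logic, format.
class RequestValidator:
    def validate(self, data):
        # Concern 1: request validation (e.g., the email must look valid).
        if '@' not in data.get('email', ''):
            raise ValueError('invalid email')
        return data

class TransferService:
    def execute(self, data):
        # Concern 2: business rule — cannot transfer more than the balance.
        if data['amount'] > data['balance']:
            raise ValueError('insufficient funds')
        return {'transferred': data['amount']}

class ResponseFormatter:
    def format(self, payload, status):
        # Concern 3: response formatting.
        return {'status': status, 'data': payload}

class TransferController:
    def __init__(self):
        self.validator = RequestValidator()
        self.service = TransferService()
        self.formatter = ResponseFormatter()

    def handle(self, request):
        # The controller only wires the concerns together.
        try:
            data = self.validator.validate(request)
            result = self.service.execute(data)
            return self.formatter.format(result, 'success')
        except ValueError as err:
            return self.formatter.format(str(err), 'failure')
```

Each concern can now be changed, and its bugs traced, without touching the others.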

There is more juicy stuff to come 😋

Like I said at the beginning of the article, this is going to be a series because there is a lot of information to digest and I want to make sure you can remember a lot after reading each article. I want to separate the concerns you know 😂, so that this article does not become massive like the controllers we will be refactoring starting from the next part. If you didn’t get that joke, then maybe next time.

In the next article, Part 2, we will refactor a controller function with a specific focus on the request validation concern. We will build a Request validator class and abstract all the logic involved in validating the client’s request away from the controller. I will be using Laravel in the rest of the series.

Quick Recap

  1. Separation of concerns is simply a concept that promotes the idea that you should always look at your code to identify the different functionalities involved, and think of how you can break down the code into smaller components for clarity, easy debugging, and maintenance among other benefits.
  2. All the logic involved in fulfilling a client’s request on the server-side should not be in a single location, like a controller.
  3. Controllers should serve as classes that delegate tasks to other parts of the codebase and get back a response from those parts. Depending on the response it receives, the controller decides whether to send a success or failure response back to the client.
  4. More code might be involved when trying to break down a feature into smaller components but remember, it pays off in the long run.

It’s a wrap 🎉

I sincerely hope that you have learned something, no matter how small, after reading this. If you did, kindly drop a thumbs up.

Thanks for sticking with me till the end. If you have any suggestions or feedback, kindly drop them in the comment section. Enjoy the rest of your day…bye 😊.

Laravel News Links

Pandas DataFrame Methods: droplevel(), pivot(), pivot_table(), reorder_levels(), sort_values() and sort_index()

http://img.youtube.com/vi/PMKuZoQoYE0/0.jpg

The Pandas DataFrame/Series has several methods to reshape, reorder, and sort data. When applied to a DataFrame/Series, these methods rearrange rows, columns, and index levels.

This is Part 13 of the DataFrame methods series:

  • Part 1 focuses on the DataFrame methods abs(), all(), any(), clip(), corr(), and corrwith().
  • Part 2 focuses on the DataFrame methods count(), cov(), cummax(), cummin(), cumprod(), cumsum().
  • Part 3 focuses on the DataFrame methods describe(), diff(), eval(), kurtosis().
  • Part 4 focuses on the DataFrame methods mad(), min(), max(), mean(), median(), and mode().
  • Part 5 focuses on the DataFrame methods pct_change(), quantile(), rank(), round(), prod(), and product().
  • Part 6 focuses on the DataFrame methods add_prefix(), add_suffix(), and align().
  • Part 7 focuses on the DataFrame methods at_time(), between_time(), drop(), drop_duplicates() and duplicated().
  • Part 8 focuses on the DataFrame methods equals(), filter(), first(), last(), head(), and tail()
  • Part 9 focuses on the DataFrame methods equals(), filter(), first(), last(), head(), and tail()
  • Part 10 focuses on the DataFrame methods reset_index(), sample(), set_axis(), set_index(), take(), and truncate()
  • Part 11 focuses on the DataFrame methods backfill(), bfill(), fillna(), dropna(), and interpolate()
  • Part 12 focuses on the DataFrame methods isna(), isnull(), notna(), notnull(), pad() and replace()
  • Part 13 focuses on the DataFrame methods droplevel(), pivot(), pivot_table(), reorder_levels(), sort_values() and sort_index()

Getting Started

Remember to add the Required Starter Code to the top of each code snippet. This snippet will allow the code in this article to run error-free.

Required Starter Code

import pandas as pd
import numpy as np 

Before any data manipulation can occur, two new libraries will require installation.

  • The pandas library enables access to/from a DataFrame.
  • The numpy library supports multi-dimensional arrays and matrices in addition to a collection of mathematical functions.

To install these libraries, navigate to an IDE terminal and execute the commands below at the command prompt. The prompt is shown here as a dollar sign ($); your terminal’s prompt may differ.

$ pip install pandas

Hit the <Enter> key on the keyboard to start the installation process.

$ pip install numpy

Hit the <Enter> key on the keyboard to start the installation process.

Feel free to check out the correct ways of installing those libraries here:

If the installations were successful, a message displays in the terminal indicating the same.

DataFrame droplevel()

The droplevel() method removes the specified level from the index or columns of a DataFrame/Series. This method returns a DataFrame/Series with that level removed.

The syntax for this method is as follows:

DataFrame.droplevel(level, axis=0)
Parameter Description
level If the level is a string, this level must exist. If a list, the elements must exist and each must be a level name/position of the index.
axis If 0 or 'index' (the default), the level is removed from the row index. If 1 or 'columns', the level is removed from the column index.

For this example, we generate random stock prices and then drop (remove) level Stock-B from the DataFrame.

nums = np.random.uniform(low=0.5, high=13.3, size=(3,4))
df_stocks = pd.DataFrame(nums).set_index([0, 1]).rename_axis(['Stock-A', 'Stock-B'])
print(df_stocks)

result = df_stocks.droplevel('Stock-B')
print(result)
  • Line [1] generates random numbers for three (3) lists within the specified range. Each list contains four (4) elements (size=3,4). The output saves to nums.
  • Line [2] creates a DataFrame, sets the index, and renames the axis. This output saves to df_stocks.
  • Line [3] outputs the DataFrame to the terminal.
  • Line [4] drops (removes) Stock-B from the DataFrame and saves it to the result variable.
  • Line [5] outputs the result to the terminal.

Output:

df_stocks

                            2         3
Stock-A   Stock-B                      
12.327710 10.862572  7.105198  8.295885
11.474872 1.563040   5.915501  6.102915

result

                  2         3
Stock-A                      
12.327710  7.105198  8.295885
11.474872  5.915501  6.102915
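The same method works on column levels via axis=1. A self-contained sketch (with made-up data, not from the stock example above):

```python
import pandas as pd

# Build a DataFrame whose columns have two levels.
cols = pd.MultiIndex.from_tuples(
    [('price', 'open'), ('price', 'close')],
    names=['kind', 'moment'])
df = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]], columns=cols)

# Remove the outer 'kind' level from the columns.
flat = df.droplevel('kind', axis=1)
print(list(flat.columns))  # ['open', 'close']
```

After the drop, only the inner 'moment' level remains on the columns.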

DataFrame pivot()

The pivot() method reshapes a DataFrame/Series and produces/returns a pivot table based on column values.

The syntax for this method is as follows:

DataFrame.pivot(index=None, columns=None, values=None)
Parameter Description
index This parameter can be a string, object, or a list of strings and is optional. This option makes up the new DataFrame/Series index. If None, the existing index is selected.
columns This parameter can be a string, object, or a list of strings and is optional. Makes up the new DataFrame/Series column(s).
values This parameter can be a string, object, or a list of the previous and is optional. It specifies the column(s) used to populate the values of the new DataFrame.

For this example, we generate 3-day sample stock prices for Rivers Clothing. The column headings display the following characters.

  • A (for Opening Price)
  • B (for Midday Price)
  • C (for Closing Price)
cdate_idx = ['01/15/2022', '01/16/2022', '01/17/2022'] * 3
group_lst = list('AAABBBCCC')
vals_lst  = np.random.uniform(low=0.5, high=13.3, size=(9))

df = pd.DataFrame({'dates': cdate_idx,
                   'group': group_lst,
                   'value': vals_lst})
print(df)

result = df.pivot(index='dates', columns='group', values='value')
print(result)
  • Line [1] creates a list of dates and multiplies this by three (3). The output is three (3) entries for each date. This output saves to cdate_idx.
  • Line [2] creates a list of headings for the columns (see above for definitions). Three (3) of each character are required (9 characters). This output saves to group_lst.
  • Line [3] uses np.random.uniform to create a random list of nine (9) numbers between the set range. The output saves to vals_lst.
  • Line [4] creates a DataFrame using all the variables created on lines [1-3]. The output saves to df.
  • Line [5] outputs the DataFrame to the terminal.
  • Line [6] creates a pivot from the DataFrame and groups the data by dates. The output saves to result.
  • Line [7] outputs the result to the terminal.

Output:

df

        dates group      value
0  01/15/2022     A   9.627767
1  01/16/2022     A  11.528057
2  01/17/2022     A  13.296501
3  01/15/2022     B   2.933748
4  01/16/2022     B   2.236752
5  01/17/2022     B   7.652414
6  01/15/2022     C  11.813549
7  01/16/2022     C  11.015920
8  01/17/2022     C   0.527554

result

group               A         B          C
dates                                     
01/15/2022   9.627767  2.933748  11.813549
01/16/2022  11.528057  2.236752  11.015920
01/17/2022  13.296501  7.652414   0.527554
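One caveat worth knowing: pivot() cannot aggregate, so duplicate index/column pairs raise a ValueError, whereas pivot_table() (covered in the next section) aggregates them. A minimal sketch with invented data:

```python
import pandas as pd

# Two rows share the same (date, group) pair, so pivot() cannot
# place both values in a single cell and raises a ValueError.
df = pd.DataFrame({'dates': ['01/15/2022', '01/15/2022'],
                   'group': ['A', 'A'],
                   'value': [1.0, 2.0]})
try:
    df.pivot(index='dates', columns='group', values='value')
except ValueError as err:
    print('pivot failed:', err)

# pivot_table() aggregates the duplicates instead (mean by default).
print(df.pivot_table(index='dates', columns='group', values='value'))
```

Here the duplicate values 1.0 and 2.0 collapse to their mean, 1.5.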

DataFrame pivot_table()

The pivot_table() method streamlines a DataFrame to contain only specific data (columns). For example, say we have a list of countries with associated details. We only want to display one or two columns. This method can accomplish this task.

The syntax for this method is as follows:

DataFrame.pivot_table(values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All', observed=False, sort=True)
Parameter Description
values This parameter is the column to aggregate and is optional.
index If the parameter is an array, it must be the same length as the data. Otherwise, it may be a column, a Grouper, or a list of these.
columns If an array, it must be the same length as the data. Otherwise, it may be a column, a Grouper, or a list of these.
aggfunc This parameter is the aggregation function ('mean' by default) and can also be a list of functions. If a list, the function name(s) display at the top of the relevant column names (see Example 2).
fill_value This parameter is the value used to replace missing values in the table after the aggregation has occurred.
margins If set to True, this parameter will add the row/column data to create subtotal(s) or total(s). False, by default.
dropna If True (default), columns whose entries are all NaN are excluded.
margins_name This parameter is the name of the row/column containing the totals if the margins parameter is True. 'All', by default.
observed If True, only observed values for categorical groupers display. If False, all values display.
sort By default, sort is True. The values automatically sort. If False, no sort is applied.
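Several of these parameters can be seen together on a small inline DataFrame (hypothetical sales data, not from the countries.csv file used below): aggfunc='sum' combines duplicate region/product rows, fill_value=0 patches the missing North/Coats combination, and margins=True appends 'All' totals.

```python
import pandas as pd

# Hypothetical sales data; North has no Coats row on purpose
df = pd.DataFrame({
    'region':  ['North', 'South', 'South', 'South'],
    'product': ['Hats', 'Hats', 'Hats', 'Coats'],
    'sales':   [10, 5, 15, 30],
})

# Sum duplicate combinations, fill the missing one with 0, add totals
result = pd.pivot_table(df, values='sales', index='region', columns='product',
                        aggfunc='sum', fill_value=0, margins=True)
print(result)
```

The two South/Hats rows collapse into a single cell (20), North/Coats displays the fill_value (0), and the 'All' row/column holds the margins totals.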

For this example, a comma-delimited CSV file is read in. A pivot table is created based on selected parameters.

Code – Example 1:

df = pd.read_csv('countries.csv')
df = df.head(5)
print(df)

result = pd.pivot_table(df, values='Population', columns='Capital')
print(result)
  • Line [1] reads in a CSV file and saves to a DataFrame (df).
  • Line [2] saves the first five (5) rows of the CSV file to df (over-writing df).
  • Line [3] outputs the DataFrame to the terminal.
  • Line [4] creates a pivot table from the DataFrame based on the Population and Capital columns. The output saves to result.
  • Line [5] outputs the result to the terminal.

Output:

df

   Country Capital  Population    Area
0  Germany  Berlin    83783942  357021
1   France   Paris    67081000  551695
2    Spain  Madrid    47431256  498511
3    Italy    Rome    60317116  301338
4   Poland  Warsaw    38383000  312685

result

Capital       Berlin    Madrid     Paris      Rome    Warsaw
Population  83783942  47431256  67081000  60317116  38383000

For this example, a comma-delimited CSV file is read in. A pivot table is created based on selected parameters. Notice the max function.

Code – Example 2

df = pd.read_csv('countries.csv')
df = df.head(5)

result = pd.pivot_table(df, values='Population', columns='Capital', aggfunc=[max])
print(result)
  • Line [1] reads in a comma-separated CSV file and saves to a DataFrame (df).
  • Line [2] saves the first five (5) rows of the CSV file to df (over-writing df).
  • Line [3] creates a pivot table from the DataFrame based on the Population and Capital columns. The max population is a parameter of aggfunc. The output saves to result.
  • Line [4] outputs the result to the terminal.

Output:

result

                 max
Capital       Berlin    Madrid     Paris      Rome    Warsaw
Population  83783942  47431256  67081000  60317116  38383000
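aggfunc also accepts several functions at once; each function name then becomes a top-level column header, as in Example 2. A minimal sketch with hypothetical continent data:

```python
import pandas as pd

# Hypothetical populations (millions) with two rows per continent
df = pd.DataFrame({'Continent':  ['Europe', 'Europe', 'Asia', 'Asia'],
                   'Population': [83, 67, 1400, 1352]})

# Two aggregations at once: the result columns gain a 'min'/'max' top level
result = pd.pivot_table(df, values='Population', columns='Continent',
                        aggfunc=['min', 'max'])
print(result)
```

The result columns form a MultiIndex, so individual cells are addressed with a (function, continent) tuple.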

DataFrame reorder_levels()

The reorder_levels() method re-arranges the levels of a DataFrame/Series index. The new order may not contain duplicate levels, and no level may be dropped.

The syntax for this method is as follows:

DataFrame.reorder_levels(order, axis=0)
Parameter Description
order This parameter is a list containing the new order levels. These levels can be a position or a label.
axis If zero (0) or index is selected, apply to each column. Default is 0 (column). If one (1) or columns, apply to each row.

For this example, there are five (5) students, each with some associated data. Grades are generated using np.random.randint().

index = [(1001, 'Micah Smith', 14), (1001, 'Philip Jones', 15),
         (1002, 'Ben Grimes', 16), (1002, 'Alicia Heath', 17), (1002, 'Arch Nelson', 18)]
m_index = pd.MultiIndex.from_tuples(index)
grades_lst = np.random.randint(45,100,size=5)
df = pd.DataFrame({"Grades": grades_lst}, index=m_index)
print(df)

result = df.reorder_levels([1,2,0])
print(result)
  • Line [1] creates a List of tuples. Each tuple contains three (3) values. The output saves to index.
  • Line [2] creates a MultiIndex from the List of Tuples created on line [1] and saves to m_index.
  • Line [3] generates five (5) random grades between the specified range and saves to grades_lst.
  • Line [4] creates a DataFrame from the variables on lines [1-3] and saves to df.
  • Line [5] outputs the DataFrame to the terminal.
  • Line [6] re-orders the levels as specified. The output saves to result.
  • Line [7] outputs the result to the terminal.

Output:

df

                      Grades
1001 Micah Smith  14      52
     Philip Jones 15      65
1002 Ben Grimes   16      83
     Alicia Heath 17      99
     Arch Nelson  18      78

result

                      Grades
Micah Smith  14 1001      52
Philip Jones 15 1001      65
Ben Grimes   16 1002      83
Alicia Heath 17 1002      99
Arch Nelson  18 1002      78
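reorder_levels also accepts level labels instead of positions when the MultiIndex levels are named. A minimal sketch with hypothetical level names (class, student, age), showing that only the index order changes, not the row data:

```python
import pandas as pd

# Two students with named index levels (hypothetical names)
index = pd.MultiIndex.from_tuples(
    [(1001, 'Micah Smith', 14), (1002, 'Ben Grimes', 16)],
    names=['class', 'student', 'age'])
df = pd.DataFrame({'Grades': [52, 83]}, index=index)

# Reorder by label instead of position: student/age first, class last
result = df.reorder_levels(['student', 'age', 'class'])
print(result)
```

The Grades column is untouched; only the order of the index levels (and their names) changes.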

DataFrame sort_values()

The sort_values() method sorts (re-arranges) the elements of a DataFrame.

The syntax for this method is as follows:

DataFrame.sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)
Parameter Description
by This parameter is a string or a list of strings. These comprise the index levels/columns to sort. Dependent on the selected axis.
axis If zero (0) or index is selected, apply to each column. Default is 0 (column). If one (1) or columns, apply to each row.
ascending By default, True. Sort is conducted in ascending order. If False, descending order.
inplace If False, create a copy of the object. If True, the original object updates. By default, False.
kind Available options are quicksort, mergesort, heapsort, or stable. By default, quicksort. See numpy.sort for additional details.
na_position Available options are first and last (default). If the option is first, all NaN values move to the beginning, last to the end.
ignore_index If True, the axis numbering is 0, 1, 2, etc. By default, False.
key This parameter applies a function to the values before sorting. The function expects a Series and must return a Series; it applies independently to each sort column.

For this example, a comma-delimited CSV file is read in. This DataFrame sorts on the Capital column in descending order.

df = pd.read_csv('countries.csv')
result = df.sort_values(by=['Capital'], ascending=False)
print(result)
  • Line [1] reads in a comma-delimited CSV file and saves to df.
  • Line [2] sorts the DataFrame on the Capital column in descending order. The output saves to result.
  • Line [3] outputs the result to the terminal.

Output:

   Country     Capital  Population      Area
6      USA  Washington   328239523   9833520
4   Poland      Warsaw    38383000    312685
3    Italy        Rome    60317116    301338
1   France       Paris    67081000    551695
5   Russia      Moscow   146748590  17098246
2    Spain      Madrid    47431256    498511
8    India       Delhi  1352642280   3287263
0  Germany      Berlin    83783942    357021
7    China     Beijing  1400050000   9596961
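The key parameter expects a function that receives each sort column as a Series and returns a Series. A common use is a case-insensitive sort, sketched here with hypothetical capital names in mixed case:

```python
import pandas as pd

# Hypothetical mixed-case capitals
df = pd.DataFrame({'Capital': ['warsaw', 'Berlin', 'amsterdam', 'Madrid']})

# Without key, uppercase letters sort before lowercase ones
plain = df.sort_values(by='Capital')

# key lowercases each value before comparing, giving alphabetical order
result = df.sort_values(by='Capital', key=lambda s: s.str.lower())
print(result)
```

Without the key function, 'amsterdam' sorts after 'Madrid' because lowercase letters compare greater than uppercase ones.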

DataFrame sort_index()

The sort_index() method sorts the DataFrame by its index labels.

The syntax for this method is as follows:

DataFrame.sort_index(axis=0, level=None, ascending=True, inplace=False, kind='quicksort', na_position='last', sort_remaining=True, ignore_index=False, key=None)
Parameter Description
axis If zero (0) or index is selected, apply to each column. Default is 0 (column). If one (1) or columns, apply to each row.
level This parameter is an integer, level name, or a list of integers/level name(s). If not empty, a sort is performed on values in the selected index level(s).
ascending By default, True. Sort is conducted in ascending order. If False, descending order.
inplace If False, create a copy of the object. If True, the original object updates. By default, False.
kind Available options are quicksort, mergesort, heapsort, or stable. By default, quicksort. See numpy.sort for additional details.
na_position Available options are first and last (default). If the option is first, all NaN values move to the beginning, last to the end.
ignore_index If True, the axis numbering is 0, 1, 2, etc. By default, False.
key This parameter applies a function to the index values before sorting. The function expects an Index and must return an Index of the same shape.

For this example, a comma-delimited CSV file is read into a DataFrame. The Country column is set as the index, and the DataFrame is then sorted on it.

df = pd.read_csv('countries.csv')
df = df.set_index('Country')
result = df.sort_index()
print(result)
  • Line [1] reads in a comma-delimited CSV file and saves to df.
  • Line [2] sets the index of the DataFrame to Country. The output saves to df (over-writing original df).
  • Line [3] sorts the DataFrame (df) on the indexed column (Country) in ascending order (default). The output saves to result.
  • Line [4] outputs the result to the terminal.

Output:

            Capital  Population      Area
Country
China       Beijing  1400050000   9596961
France        Paris    67081000    551695
Germany      Berlin    83783942    357021
India         Delhi  1352642280   3287263
Italy          Rome    60317116    301338
Poland       Warsaw    38383000    312685
Russia       Moscow   146748590  17098246
Spain        Madrid    47431256    498511
USA      Washington   328239523   9833520
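sort_index can also target a single level of a MultiIndex via the level parameter. A minimal sketch with hypothetical continent/country data, sorting on the inner country level in descending order:

```python
import pandas as pd

# Hypothetical populations (millions) under a two-level index
index = pd.MultiIndex.from_tuples(
    [('Europe', 'Spain'), ('Europe', 'France'),
     ('Asia', 'India'), ('Asia', 'China')],
    names=['continent', 'country'])
df = pd.DataFrame({'Population': [47, 67, 1352, 1400]}, index=index)

# Sort on the country level only, in descending order
result = df.sort_index(level='country', ascending=False)
print(result)
```

Because level='country' is the primary sort key, rows are ordered by country name regardless of which continent they belong to.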

Finxter

Our Flag Means Death (Teaser)

https://theawesomer.com/photos/2022/01/our_flag_means_death_t.jpg

Our Flag Means Death (Teaser)

Link

Rhys Darby (Murray from Flight of the Conchords) stars in this high-seas comedy adventure series about a wealthy man who abandons his life of privilege to become a pirate. Taika Waititi, the busiest man in Hollywood, does double duty as Executive Producer and performs as Blackbeard. Premieres 3.2022 on HBO Max.

The Awesomer

Amazon reveals title and trailer for new ‘Lord of the Rings’ series coming to Prime Video

http://img.youtube.com/vi/uEepEyrHmtE/0.jpg

Are you ready to head back to Middle Earth? Amazon Studios revealed the title and trailer Wednesday for its highly anticipated prequel to the “Lord of the Rings” series, called “The Lord of the Rings: The Rings of Power.”

The series will debut on Prime Video on Sept. 2.

“The Rings of Power” is set in the Second Age of Middle Earth, thousands of years before the events of J.R.R. Tolkien’s “The Hobbit” and “The Lord of the Rings.”

The series “will take viewers back to an era in which great powers were forged, kingdoms rose to glory and fell to ruin, unlikely heroes were tested, hope hung by the finest of threads, and the greatest villain that ever flowed from Tolkien’s pen threatened to cover all the world in darkness,” Amazon said in its YouTube description for the trailer.

Amazon founder Jeff Bezos tweeted an image of himself holding a big slab of wood with the series title on it. “Can’t wait for you to see it,” he wrote.

IGN has behind-the-scenes details on how the title sequence was created, and it wasn’t with CGI, but rather with molten metal and a “hunk of reclaimed redwood.”

Amazon first announced that it had acquired the rights to adapt Tolkien’s work in 2017.

“’The Lord of the Rings’ is a cultural phenomenon that has captured the imagination of generations of fans through literature and the big screen,” Sharon Tal Yguado, head of Scripted Series for Amazon Studios, said in a statement at the time. 

Tolkien’s book series was named Amazon customers’ favorite book of the millennium in 1999. Director Peter Jackson’s theatrical adaptations included “The Fellowship of the Ring” (2001); “The Two Towers” (2002); and “The Return of the King” (2003). The films grossed nearly $6 billion worldwide and won a combined 17 Academy Awards, including Best Picture for “King.”

GeekWire