Counting on Number Lines to 120
First Grade (1) — Math: Counting and number patterns
LEARNING TARGET
 Students will be able to identify missing numbers on a number line up to 120.
 Students will be able to fill in missing numbers on a number line up to 120.
LEARNING PROGRESSION
EXTENSION SKILL
 Students will be able to understand the pattern and sequence of numbers on a number chart up to 120.
 Students will be able to use counting strategies to identify missing numbers on a number chart up to 120.
Counting and numbers — Counting on a Hundred Chart to 120
DURATION
 Introduction (5 minutes)
 Instruction (15 minutes)
 Guided Practice (15 minutes)
 Misconception Review (5 minutes)
 Independent Practice (15 minutes)
 Exit Card Formative Assessment (5 minutes)
MATERIALS
 Number line chart up to 120
 Number cards (1–120)
 Whiteboard and markers
 Pencils and paper
VOCABULARY
 Number line
 Counting
 Missing numbers
TEACHING RESOURCES
CENTERS & TASK CARDS
No Centers or Task Cards Available
IEP GOAL WORKBOOKS
What is a Math IEP Objective Workbook?
 40 daily fluency assignments
 8 student self-monitoring progress sheets with weekly goal setting
 2 baseline assessments
 8 formative assessments
 1 present level of performance self-graphing data tracking sheet (perfect for progress reporting and IEP meetings)
 Teacher answer keys
WORKSHEET PACK
Included printable worksheets
 Guided Practice
 Independent Practice
 Homework
 Exit Ticket I
 Exit Ticket II
 Progress Monitoring I
 Progress Monitoring II
 Assessment
5 AND 1 INTERVENTIONS
No Interventions Available
Games can be used as a reward, as an introduction to a concept, or for independent practice.
 Review the concept of counting on a number line and the placement of numbers on the line.
 Introduce the idea of missing numbers on a number line and why it is important to be able to identify them.
 Model how to find a missing number on the number line, using a few examples.
 Guide students through filling in missing numbers on the number line together.
Guided Practice (15 minutes):
 Hand out number cards to each student and have them practice identifying the missing numbers on the number line.
 Students can use the whiteboard or paper to write down the missing numbers.
 Have students work independently or in pairs to fill in the missing numbers on a number line.
 Encourage them to use the number cards to help them with their counting.
 Assign a few problems to be completed at home, using the number line chart provided.
Progress Monitoring Formative Assessment (10 minutes):
 Check student progress and understanding by reviewing their homework.
 Hand out a small number line chart and have students fill in the missing numbers.
 Collect the exit cards to assess understanding.
 Exit Card Formative Assessment
 Progress Monitoring Formative Assessment
 Summative Assessment: 10-question worksheet (8/10 correct for mastery)
 Review the concept of finding missing numbers on a number line and how it can be useful in solving math problems.
 Challenge advanced students by having them fill in missing numbers on a number line up to 1000.
 Have students create their own number lines and practice identifying missing numbers on them.
 Provide additional support for students who are struggling with counting by 10s or skip counting.
 Use manipulatives, such as counting cubes or number cards, to help students visualize the number line and practice counting.


Skipping numbers: Students may skip numbers while counting or skip counting, leading to errors in identifying missing numbers on the number line.

Counting by ones: Students may try to count by ones to identify the missing number, which can be time-consuming and confusing, especially for larger numbers.

Confusing the placement of numbers: Students may have difficulty placing numbers correctly on the number line, leading to errors in identifying missing numbers.

Not recognizing the pattern: Students may not recognize the pattern of the numbers on the number line and therefore have difficulty identifying the missing number.

Difficulty with place value: Students may struggle with understanding place value and the significance of the position of the digits in a number, leading to errors in identifying the missing number.







ACTIVITIES
No Activities Available
VIDEOS
No Video Available
TEACHING TIPS
Use concrete examples and handson activities to help students understand the concept of missing numbers on a number line.
STUDENT MISCONCEPTIONS
Common misconceptions when identifying missing numbers on a number line up to 120 include skipping numbers, counting only by ones, misplacing numbers, failing to recognize the pattern, and place-value confusion, as described above.
STANDARD
Common Core Standard:
1.NBT.A.1 — Count to 120, starting at any number less than 120. In this range, read and write numerals and represent a number of objects with a written numeral.
Number Lines and Dry-Wipe Pens
All our number lines are double-sided and include a FREE dry-wipe pen. They’re incredibly long-lasting, made from durable PVC, unlike some other boards made from laminated card, which can peel. You can even wash our boards in hot water! Each number line is 30cm long and 7cm high.
Number Line 0 — 10
A versatile, double-sided dry-wipe number line. This board has 0–10 markings, with individual highlighted units. The reverse side has useful blank ruled markings.
Made from durable PVC. All number lines include a FREE dry-wipe pen.
Product Code: NL10
Price: £1.10
Number Line 0 — 10 (Pack of 10)
A pack of 10 Number Lines 0 — 10. A versatile, double-sided dry-wipe number line. This board has 0–10 markings, with individual highlighted units. The reverse side has useful blank ruled markings.
Made from durable PVC. All number lines include a FREE dry-wipe pen.
Product Code: XL10
Price: £9.90
Number Line 0 — 20
A versatile, double-sided dry-wipe number line. This board has 0–20 markings, ideal for introducing double-digit numbers. The reverse side has useful blank ruled markings.
Made from durable PVC. All our number lines include a FREE dry-wipe pen.
Product Code: NL20
Price: £1.10
Number Line 0 — 20 (Pack of 10)
A pack of 10 Number Lines 0 — 20. A versatile, double-sided dry-wipe number line. This board has 0–20 markings, ideal for introducing double-digit numbers. The reverse side has useful blank ruled markings.
Made from durable PVC. All our number lines include a FREE dry-wipe pen.
Product Code: XL20
Price: £9.90
Number Line 0 — 100
This board has 100 markings, with highlighted units of 10. The reverse side has useful blank ruled markings.
All number lines include a FREE drywipe pen.
Product Code: NL100
Price: £1.10
Number Line 0 — 100 (Pack of 10)
A pack of 10 number lines. A versatile, double-sided dry-wipe number line. This board has 100 markings, with highlighted units of 10. The reverse side has useful blank ruled markings.
Made from durable PVC. All number lines include a FREE dry-wipe pen.
Product Code: XL100
Price: £9.90
We’ve got great number lines to 20, number lines to 10, and number lines to 100. They are all double-sided printed and very long-lasting. These are fantastic number lines for kids.
These number lines are ideal for addition and subtraction. Our maths number lines are brilliantly popular and perfect for every school bookbag.
Every number line has rounded corners so there are no sharp edges.
Great for KS1, KS2, KS3 and KS4 in both Primary and Secondary schools. Also brilliant for using number lines at home.
Performance of Deductor: data files (up to 10 million lines)
At the final stage of testing, the speed of import/export in Deductor 5.3 was analyzed for the remaining file data sources: txt, csv, ddf.
Testing was carried out for datasets with the number of records from 2 million to 10 million with a step of 2 million.
During testing, about 1 thousand runs were performed. The total execution time of the testing process of this stage was a little more than 126 hours of continuous program operation.
Import/export time
As an example, let’s take the average import/export time versus the number of records in a file, with the number of fields fixed at 8.
Presenting the execution-time data for every test performed would require about 60 graphs. For this reason, the results are reduced to a speed index, which lets us present all measurements in the most compact form.
Import/export speed
Due to the limit on the maximum number of lines for xls and xlsx files, as well as the limited size of dbf files (2 GB), these files will not be analyzed further. Thus, here is an analysis of the following file sources: csv, txt and ddf.
Analysis shows that ddf files still have the highest performance (Figures 3 and 4).
Of the remaining two file sources, csv files stand out as slightly faster than txt files, which turned out to be the slowest.
Figure 5 shows that the export speed of the presented file sources (for large amounts of data) is lower than the import speed.
File sizes
Based on the results of the analysis, it can be concluded that the most compact storage format for numeric and Date data types is ddf files, while csv files show the best results in the case of string formats.
As an example, the table (Table 1) presents data on the sizes (in Mb) of source files depending on the data format. The data is presented for files containing 8 columns and 10 million rows.
Data format                 | txt      | csv      | ddf
Integers                    | 1,239.78 |   764.00 |   686.65
Fractional numbers          | 1,239.78 |   916.57 |   686.65
Date                        | 1,239.78 |   848.77 |   686.65
Strings: with repetitions   | 1,926.42 |   658.80 | 1,789.86
Strings: 50 random letters  | 3,910.07 | 3,900.53 | 4,997.25
Table 1. File sizes in MB (10 million records, 8 columns).
Figure 7 shows the dependence of the file size on the number of records for a fixed number of fields (8 fields). Graphs are obtained by aggregating indicators for all presented data types.
As can be seen from the graphs, the size of the files depends linearly on the number of records in the file. txt files show the highest file size growth rate, and csv files show the lowest.
Pandas library and work with tables
Pandas is the main Python library for data analysis. It is fast and powerful: it can work with tables of millions of rows. Together with Maria Zharova, a mentor on the Data Science course, we cover the commands that will let you start working with real data.
Development Environment
Pandas works both in IDEs (development environments) and in cloud-based programming notebooks. How to install the library in a specific IDE, read here. For our examples we will work in the Google Colab cloud environment. It is convenient because you do not need to install anything on your computer: you can upload and work with files online, and there is also a collaborative mode for working with colleagues. We wrote about Colab in this review.
Data analysis in Pandas
A screen with available notebooks immediately appears on the Google Colab website. Let’s create a new notebook:
Importing the library
Pandas is not available in Python by default. To get started with it, you need to import it using this code: import pandas as pd
pd is a common abbreviation for the library. Hereafter, we will refer to it in this way.
Data download
We will use the 2019 World Happiness Report as a training dataset. You can open it in two ways.
1. Load into session storage:
And read using the following command: df = pd.read_csv('WHR_2019.csv')
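As a runnable sketch of the same call: the snippet below reads CSV text from an in-memory buffer instead of the WHR_2019.csv file, so the columns and values are illustrative stand-ins, not the actual report data.

```python
import io
import pandas as pd

# A tiny stand-in for the WHR_2019.csv file used in the article
# (the real file has more columns and 156 rows).
csv_text = """Ranking,Country or region,Points
1,Finland,7.769
2,Denmark,7.600
3,Norway,7.554
"""

# The same read_csv call works with a file path instead of a buffer.
df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)  # (3, 3)
```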
2. Create an object of type DataFrame manually, for example, if available several lists and you need to combine them into one table or if you want to visualize a small data set.
This can be done through the dictionary and through the transformation of nested lists (actually tables).
Dictionary: my_df = pd.DataFrame({'id': [1, 2, 3], 'name': ['Bob', 'Alice', 'Scott'], 'age': [21, 15, 30]})
Through nested lists: df = pd.DataFrame([[1,'Bob', 21], [2,'Alice', 15], [3,'Scott', 30] ], columns = ['id','name', 'age'])
The results will be equivalent.
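The equivalence of the two constructions can be checked directly; a minimal sketch using the article's own example data:

```python
import pandas as pd

# Build the same frame two ways, as described above.
from_dict = pd.DataFrame({'id': [1, 2, 3],
                          'name': ['Bob', 'Alice', 'Scott'],
                          'age': [21, 15, 30]})
from_lists = pd.DataFrame([[1, 'Bob', 21], [2, 'Alice', 15], [3, 'Scott', 30]],
                          columns=['id', 'name', 'age'])

# The two constructions produce identical frames.
assert from_dict.equals(from_lists)
```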
View data
The uploaded file has been converted to a dataframe and is now stored in the variable df. Let’s see what it looks like using the .head() method, which outputs the first five rows by default: df.head()
If you need to look at a different number of rows, pass it in the brackets, for example df.head(12). The last rows of the frame are output by the .tail() method.
Also, to simply display the dataset nicely, use the display() function. By default in Jupyter Notebook, if you write a variable name on the last line of a cell (even without the keyword display), its contents will be displayed.
display(df) #equivalent to the df command if it is the last row of a cell
Dataset dimensions
The number of rows and columns in a dataframe can be found using the .shape attribute: df.shape shows the dimensions along both axes at once, df.shape[0] is the number of rows, and df.shape[1] is the number of columns.
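A minimal sketch of these inspection tools on a small made-up frame (not the WHR data):

```python
import pandas as pd

# Ten rows, two columns of illustrative data.
df = pd.DataFrame({'a': range(10), 'b': range(10)})

print(df.shape)         # (10, 2), dimensions along both axes
print(df.shape[0])      # 10, the number of rows
print(df.shape[1])      # 2, the number of columns
print(len(df.head()))   # 5, .head() defaults to the first five rows
print(len(df.tail(2)))  # 2, .tail(n) returns the last n rows
```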
Renaming columns
Column names can be renamed using the rename() method, which takes a dictionary of old-to-new name pairs in its columns parameter, for example: df.rename(columns={'Score': 'Points'}, inplace=True) df.head()
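As a runnable sketch, assuming a hypothetical frame with the report's original English column names:

```python
import pandas as pd

# Hypothetical one-row frame with the original column names.
df = pd.DataFrame({'Overall rank': [1], 'Score': [7.769]})

# Keys are the old names, values the new ones;
# inplace=True modifies df itself instead of returning a copy.
df.rename(columns={'Overall rank': 'Ranking', 'Score': 'Points'}, inplace=True)
print(df.columns.tolist())  # ['Ranking', 'Points']
```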
Characteristics of the dataset
To get an initial idea of the statistical characteristics of our dataset, one command is enough: df.describe() outputs the count, mean, standard deviation, minimum, quartiles, and maximum for each numeric column.
Another command, df.info(), shows complementary information: how many non-null values each column contains (in our case there are no missing values) and each column's data type.
Working with individual columns or rows
There are several ways to select multiple columns.
1. Slice the frame: df[['Ranking', 'Healthy life expectancy']]
The slice can be stored in a new variable: data_new = df[['Ranking', 'Healthy life expectancy']]
You can now perform any action on this reduced frame.
2. Use the loc method
If there are a lot of columns, you can use the loc method, which selects values by their labels: df.loc[:, 'Ranking':'Social support']
In this case, we kept all columns from 'Ranking' to 'Social support'.
3. Use method iloc
If you need to cut rows and columns at the same time, you can do this using the iloc method: df.iloc[0:100, 0:5]
The first parameter gives the row positions that will remain, the second the column positions.
With iloc the values at the right end of a slice are excluded, so the last row we see is 99.
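The label-inclusive behaviour of loc versus the position-exclusive behaviour of iloc can be seen side by side; a sketch on made-up data with column names borrowed from the article:

```python
import pandas as pd

df = pd.DataFrame({'Ranking': range(1, 11),
                   'Country': list('ABCDEFGHIJ'),
                   'Points': [7.8 - 0.1 * i for i in range(10)]})

# .loc selects by label and INCLUDES the right end of a slice...
by_label = df.loc[0:4, 'Ranking':'Country']   # rows 0..4, both columns
# ...while .iloc selects by position and EXCLUDES it.
by_position = df.iloc[0:4, 0:2]               # rows 0..3, first two columns

print(by_label.shape)     # (5, 2)
print(by_position.shape)  # (4, 2)
```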
Any column can be converted to a separate list using the tolist() method. This will make things easier if you want to extract data from a column: df['Points'].tolist()
It is often necessary to get the column names of a dataframe as a list. This can also be done with tolist(): df.columns.tolist()
Adding new rows and columns
New columns can be added to the original dataset, creating new "features", as they say in machine learning. For example, let's create a "Sum" column, in which we will sum the values of the "GDP per capita" and "Social support" columns (we do this for teaching purposes; in practice, summing these indicators does not make sense): df['Sum'] = df['GDP per capita'] + df['Social support']
You can add new rows: for this, create a dictionary whose keys are column names. Columns you do not specify will default to the empty value NaN. Add another country called Country: new_row = {'Ranking': 100, 'Country or region': 'Country', 'Points': 100} df = df.append(new_row, ignore_index=True)
Important: when adding a new row with the .append() method, do not forget the parameter ignore_index=True, otherwise an error will occur. Note that .append() was deprecated and removed in pandas 2.0; pd.concat() is the modern replacement.
Sometimes it is useful to add a row with the sum, median, or arithmetic mean of each column. This can be done with the aggregating functions sum(), mean(), and median(). For example, add a row at the end with the sums of the values of each column: df = df.append(df.sum(axis=0), ignore_index=True)
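Because .append() no longer exists in pandas 2.0 and later, the same two steps can be written with pd.concat(); a minimal sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame({'Ranking': [1, 2], 'Points': [7.77, 7.60]})

# pd.concat replaces the removed df.append; wrap the new row in a
# one-row DataFrame first.
new_row = pd.DataFrame([{'Ranking': 100, 'Points': 100}])
df = pd.concat([df, new_row], ignore_index=True)

# Appending a totals row works the same way: df.sum(axis=0) is a Series,
# so convert it to a one-row frame before concatenating.
totals = df.sum(axis=0).to_frame().T
df = pd.concat([df, totals], ignore_index=True)
print(df.shape)  # (4, 2)
```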
Deleting rows and columns
You can delete individual columns using the drop() method, passing a list of column names: df = df.drop(['Sum'], axis=1)
In other cases, it's better to use the slices described above.
Note that this method requires saving the result by assigning the dataframe with the applied method back to the original variable. You must also specify axis=1, which shows that we are deleting a column, not a row.
Accordingly, by setting axis=0 you can delete any row from the dataframe: write its index as the first argument of drop(). Let's delete the last row (its index is the number of rows minus one): df = df.drop(df.shape[0]-1, axis=0)
Dataframe copying
You can completely copy the original dataframe into a new variable. This is useful if you need to transform a lot of data while working not with individual columns but with all the data: df_copied = df.copy()
Unique values
Unique values in any dataframe column can be displayed using the .unique() method: df['Country or region'].unique()
To count them, you can wrap the call in the len() function: len(df['Country or region'].unique())
Counting the number of values
It differs from the previous method in that it additionally counts how many times each unique value occurs in the column; it is written as .value_counts(): df['Country or region'].value_counts()
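The difference between the two methods in one runnable sketch, on illustrative data rather than the actual report:

```python
import pandas as pd

df = pd.DataFrame({'Country or region': ['Finland', 'Norway', 'Finland',
                                         'Denmark', 'Finland']})

# .unique() lists each distinct value once, in order of first appearance.
print(df['Country or region'].unique())       # ['Finland' 'Norway' 'Denmark']
print(len(df['Country or region'].unique()))  # 3 distinct values

# .value_counts() additionally counts the occurrences of each value.
counts = df['Country or region'].value_counts()
print(counts['Finland'])  # 3
```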
Data grouping
A generalization of .value_counts() is the .groupby() method — it also groups the data of a column by equal values. The difference is that with it you can not only display the number of unique elements in one column, but also find, for each group, the sum / mean / median of any other columns.
Consider a few examples. To make them clearer, we will round all the values in the 'Points' column (then it will contain values by which we can group the data): df['Points_new'] = round(df['Points'])
1) Group the data by the new score column and count how many non-empty values each group contains in the other columns. To do this, we use .count() as the aggregating function: df.groupby('Points_new').count()
It turns out that countries most often received 6 points (there were 49 of them).
2) We get a more meaningful result for data analysis by calculating the sum of the values in each group. For this, instead of .count() we use .sum(): df.groupby('Points_new').sum()
3) Calculate the mean value in each group with .mean(): df.groupby('Points_new').mean()
4) Calculate the median. To do this, we write the command median() : df.groupby('Points_new').median()
These are the most basic aggregating functions that will come in handy at the initial stage of working with data.
Here is an example of the syntax for aggregating values by group with several functions at once (a Python dictionary cannot repeat a key, so pass a list of function names instead): df_agg = df.groupby('Points_new').agg(['count', 'sum', 'mean', 'median'])
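A runnable sketch of the multi-function form, restricted to one aggregated column and using made-up values:

```python
import pandas as pd

df = pd.DataFrame({'Points_new': [6, 6, 7, 7, 7, 8],
                   'GDP per capita': [0.8, 1.0, 1.1, 1.2, 1.3, 1.5]})

# Several aggregates at once: pass a list of function names to .agg().
stats = df.groupby('Points_new')['GDP per capita'].agg(
    ['count', 'sum', 'mean', 'median'])
print(stats)
```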
Pivot tables
Sometimes you need to group by two parameters at once. For this, Pandas uses pivot tables, created with pivot_table(). They are based on dataframes but, unlike them, can group data not only by column values but also by rows.
The cells of such a table contain values grouped both by the «coordinate» of the column and by the «coordinate» of the row. The corresponding aggregating function is specified as a separate parameter.
Let’s look at an example. We will group the average values from the «Social support» column by the rounded score and the rounded GDP per capita. In the previous step we already rounded the scores; now we will round the GDP values: df['GDP_new'] = round(df['GDP per capita'])
Now let’s make a pivot table: horizontally we place the grouped values of the rounded GDP column (GDP_new), and vertically the rounded values of the «Points» column (Points_new). The cells of the table will contain the average values from the «Social support» column, grouped by these two columns at once: df.pivot_table(index='Points_new', columns='GDP_new', values='Social support', aggfunc='mean')
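A minimal sketch of the same pivot on made-up values, so the cell arithmetic is easy to check by hand:

```python
import pandas as pd

df = pd.DataFrame({'Points_new': [6, 6, 7, 7],
                   'GDP_new': [1.0, 1.0, 1.0, 2.0],
                   'Social support': [1.0, 1.2, 1.4, 1.6]})

# Rows are grouped by Points_new, columns by GDP_new; each cell holds the
# mean of "Social support" for that (row, column) pair.
pivot = df.pivot_table(index='Points_new', columns='GDP_new',
                       values='Social support', aggfunc='mean')
print(pivot)  # cell (6, 1.0) is mean(1.0, 1.2) = 1.1
```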
Sort data
Dataset rows can be sorted by the values of any column using the sort_values() function. By default, the method sorts in ascending order. For example, let’s sort by the column of GDP per capita values: df.sort_values(by = 'GDP per capita').head()
It can be seen that the highest GDP does not guarantee a high place in the ranking.
To sort in descending order, use the parameter ascending=False: df.sort_values(by = 'GDP per capita', ascending=False)
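Both directions in one runnable sketch, on three made-up countries:

```python
import pandas as pd

df = pd.DataFrame({'Country': ['A', 'B', 'C'],
                   'GDP per capita': [1.2, 0.9, 1.5]})

ascending = df.sort_values(by='GDP per capita')                    # default
descending = df.sort_values(by='GDP per capita', ascending=False)

print(ascending['Country'].tolist())   # ['B', 'A', 'C']
print(descending['Country'].tolist())  # ['C', 'A', 'B']
```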
Filtering
Sometimes you need to get rows that meet a certain condition; for this, «filtering» of the dataframe is used. The conditions can be very different, let’s look at several examples and their syntax:
1) Getting a row with a specific value in a column (we will output the row of the dataset for Norway): df[df['Country or region'] == 'Norway']
2) Getting rows whose values in some column satisfy an inequality. Let’s display the rows for countries where «Healthy life expectancy» is greater than one: df[df['Healthy life expectancy'] > 1]
3) Conditions can also work with strings. Let’s display the rows whose country names begin with the letter F — for this we use the .startswith() string method: df[df['Country or region'].str.startswith('F')]
4) You can combine multiple conditions at the same time using logical operators. Let’s display the rows in which the value of GDP is greater than 1 and the level of social support is greater than 1.5: df[(df['GDP per capita'] > 1) & (df['Social support'] > 1.5)]
Thus, if the expression inside the outer square brackets is true, the dataset row satisfies the filtering condition. In other situations you can therefore use any functions or constructs that return True or False in the filtering condition.
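The filtering patterns above, sketched on made-up values (the real report's numbers differ):

```python
import pandas as pd

df = pd.DataFrame({'Country or region': ['Finland', 'France', 'Norway'],
                   'GDP per capita': [1.34, 1.32, 1.49],
                   'Social support': [1.59, 1.47, 1.58]})

# Each comparison yields a boolean Series; combine them with & (and) or
# | (or), wrapping every comparison in parentheses.
mask = (df['GDP per capita'] > 1.33) & (df['Social support'] > 1.5)
print(df[mask]['Country or region'].tolist())  # ['Finland', 'Norway']

# String conditions go through the .str accessor.
print(df[df['Country or region'].str.startswith('F')].shape[0])  # 2
```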
Applying functions to columns
Often the built-in functions and methods for dataframes are not enough for a particular task. Then we can write our own function that transforms a value the way we need, and use the .apply() method to apply it to every row of the desired column.
Let’s take an example: we will write a function that converts all letters in a string to lowercase and apply it to the column of countries and regions: def my_lower(row): return row.lower() df['Country or region'].apply(my_lower)
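The same example as a self-contained sketch, on three made-up country names:

```python
import pandas as pd

df = pd.DataFrame({'Country or region': ['Finland', 'Denmark', 'Norway']})

def my_lower(row):
    # .apply() passes one cell value at a time; return its lowercase form.
    return row.lower()

lowered = df['Country or region'].apply(my_lower)
print(lowered.tolist())  # ['finland', 'denmark', 'norway']
```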
Data cleansing
This is a whole step of working with data in preparation for building models and neural networks. Consider the basic techniques and functions.
1) Removing duplicates from the dataset is done with the drop_duplicates() function. By default, only rows that are completely identical across the entire dataset are deleted, but individual columns can be specified in the parameters. For example, after rounding we have duplicates in the «GDP_new» and «Points_new» columns; let’s remove them: df_copied = df.copy() df_copied.drop_duplicates(subset = ['GDP_new', 'Points_new'])
This method does not save its result back to the source variable, so we first create a copy of our dataset in order not to modify the original.
By default the method returns a new dataframe with the duplicate rows removed. To modify the dataframe in place instead, use the parameter inplace = True: df_copied.drop_duplicates(subset = ['GDP_new', 'Points_new'], inplace = True)
2) The fillna() function is used to replace NaN gaps with some value. For example, let’s fill the gaps that appeared after the previous step with zeros: df_copied.fillna(0)
3) The dropna() method removes the rows that contain NaN values entirely (it also supports the parameter inplace = True): df_copied.dropna()
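All three cleaning steps in one runnable sketch, on a tiny frame with one duplicate row and one row of gaps (made-up values):

```python
import pandas as pd

df = pd.DataFrame({'GDP_new': [1.0, 1.0, 2.0, None],
                   'Points_new': [6.0, 6.0, 7.0, None]})

# Rows 0 and 1 are identical in the subset columns, so one of them goes.
deduped = df.drop_duplicates(subset=['GDP_new', 'Points_new'])
print(deduped.shape[0])  # 3

# fillna replaces every NaN with the given value.
filled = df.fillna(0)
print(filled.isna().sum().sum())  # 0, no gaps remain

# dropna removes rows containing NaN entirely.
dropped = df.dropna()
print(dropped.shape[0])  # 3
```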
Plotting
Pandas also provides tools for simple data visualization.
1) Line plot.
Let’s plot GDP per capita against the place in the rating: df.plot(x = 'Ranking', y = 'GDP per capita')
2) Histogram.
Display the distribution of the GDP values as a histogram: df.plot.hist(y = 'GDP per capita')
3) Scatter plot.