Technological Formation Scale for Teachers (TFS): Development and Validation

Accepted: 11.12.2020

The aim of this study is to develop a valid and reliable Technological Formation Scale to measure Technological Knowledge (TK) and Technological Pedagogical Knowledge (TPK), the main technology-related components of Technological Pedagogical Content Knowledge (TPACK). Technology has an important place in teacher training and education, and Pedagogical Content Knowledge (PCK) alone is no longer sufficient today. Measurement tools are therefore needed that cover 21st-century requirements: information and communication technology skills, production, and productive thinking. With this instrument, it becomes possible to measure the technological and technological pedagogical knowledge of teachers and teacher candidates. A further aim of this study is to offer researchers a production-oriented measurement tool within the TPACK framework. In this scale, the TK and TPK components incorporate productive thinking, so that more realistic results can be obtained. The participants were 756 teachers and teacher candidates.


Introduction
Technological Pedagogical Content Knowledge (TPACK) is a teacher training approach, built on Shulman's (1986, 1987) Pedagogical Content Knowledge (PCK), for helping teachers produce effective teaching with instructional technologies. The understanding of TPACK was developed through a series of studies by Mishra and Koehler (2006) and Koehler and Mishra (2009). The approach contains three main teacher knowledge domains: content, pedagogy, and technology (see Figure 1). These domains interact with each other, yielding Technological Pedagogical Knowledge (TPK), Pedagogical Content Knowledge (PCK), and Technological Content Knowledge (TCK). The components are presented in Figure 1.

Figure 1. The TPACK framework and its knowledge components (Koehler & Mishra, 2009).
The summary of the components is as follows:
Content Knowledge: to give specific information on a subject without pedagogical activities (Cox & Graham, 2009).
Pedagogical Knowledge: "To focus on a teacher's knowledge of the general pedagogical activities that he/she might utilize" (Cox & Graham, 2009).
Technological Knowledge: "It is defined as the knowledge of how to use emerging technologies" (Cox & Graham, 2009).
Technological Content Knowledge: "Demonstrations of exemplar teaching/learning resources produced by using different software applications" (Hu & Fyfe, 2010).
Pedagogical Content Knowledge: "To design tasks that require students to connect what they do in information and communication technology unit to what they have learned in their curriculum subject areas" (Hu & Fyfe, 2010).
Technological Pedagogical Knowledge: "To design tasks in which students work in pairs, exploring the affordances of information and communication technology tools of their choice to address a particular teaching/learning need" (Hu & Fyfe, 2010).
Technological Pedagogical Content Knowledge: "A teacher's knowledge of how to coordinate the use of subject-specific activities or topic-specific activities with topic-specific representations using emerging technologies to facilitate student learning" (Cox & Graham, 2009).
In brief, Technological Knowledge (TK) and Technological Pedagogical Knowledge (TPK) are required for Technological Formation. Technological Content Knowledge (TCK) covers technology itself and its use for production purposes, while Technological Pedagogical Knowledge (TPK) describes how to use technology in the teaching/learning process (Özden, 2012).
Based on these components, it can be said that creating an effective classroom environment requires teachers to have competences beyond Content Knowledge alone. Educational tools such as online courses are changing along with technological developments (Bulman & Fairlie, 2016). As a result, there have been a series of changes, such as the transition from blackboards to interactive boards (Adıgüzel, Gürbulak & Sarıçayır, 2011). Because of such changes, teachers' technological proficiencies have become important (Voogt & McKenney, 2016). Countries update their educational policies to define these proficiencies, and technology policies in education are being developed to keep pace with rapidly emerging technologies and to meet the needs of the era (Tekin & Polat, 2014). These developments place important responsibilities on teachers to apply technology in education. In addition, school administrators should have proficiencies such as supporting teachers' use of technology in lessons, helping them develop e-content, and maintaining their personal development (Bakioğlu & Şentuna, 2001).
In Turkey, technological developments are implemented in education through the Turkey Informatics Councils, the Vision 2023 Strategy Paper, the National Education Councils and the Increasing Opportunities Improving Technology Movement (FATİH) Project. Through these endeavours, studies are developed to integrate technology into Turkish educational institutions and the educational system (Tekin & Polat, 2014). Many reforms have been applied, such as using information technology tools in education, improving infrastructure, working synchronously, and producing digital content (Tekin & Polat, 2014). For this reason, not only the policies themselves but also the proficiencies of the teachers and administrators who work in educational institutions are important for these policies to be applied and carried out (Bakioğlu & Şentuna, 2001). The Ministry of National Education (MoNE) conducts periodic studies to determine the proficiencies of the teaching profession.
To form these proficiencies, the basic policies on education and the teaching profession of international organizations such as the Council of Europe, the World Bank, ILO, OECD, UNESCO and UNICEF, and the proficiency documents of several countries such as the USA, Australia, Finland, France, Hong Kong, the UK, Canada and Singapore, were analysed. As a result of these studies, the General Competencies of the Teaching Profession was published by the Directorate of Teacher Training and Development of the MoNE in 2017. It contains three proficiency fields, "professional knowledge", "professional skill", and "attitude and values", which are interrelated and mutually complementary, along with 11 proficiencies under them and 65 indicators of these proficiencies. These proficiencies are named as follows: "subject area knowledge", "subject area teaching knowledge", "legislation knowledge", "education planning", "making learning environment", "managing teaching and learning process", "national, spiritual and universal values", "assessment and evaluation", "approach to a student", "communication and cooperation" and "personal and professional development" (MoNE, 2017). In addition, a study on in-service training requirements in the field of instructional technology found that teachers most need in-service training on the topics "Using Technology in Education", "Effective Use of Teaching Materials" and "Use of the Internet for Education" (Sarıtepeci, Durak & Seferoğlu, 2016).
In a study by McKnight et al. (2016), technology was found to increase ease of access for both teachers and students, improve communication and feedback, restructure teachers' time, and change student and teacher roles. In particular, blended learning and flipped learning with technology provide access to more resources and broad opportunities for students. However, when the literature is examined, it is emphasized that teachers' technology skill levels may be an obstacle to technology integration (Carver, 2016). In a study conducted by Moradi-Rekabdarkolaei (2011) with 384 secondary school students and 367 teachers, an evaluation of the participants' critical thinking, problem solving and cognitive levels showed that students differed significantly from teachers in terms of access, management, integration, evaluation and content production.
Considering these findings in the literature, it can be said that the integration of teachers and teacher candidates into technology will become even more important for future education programs. The Technological Knowledge component of the TPACK framework therefore plays a key role in effective learning models, teacher-student communication, efficient use of time, adaptation to the digital age, and similar variables. In the literature, scales for different dimensions of TPACK have been developed regarding these components (Timur & Taşar, 2011; Gökçek & Yılmaz, 2019; Usta & Karakuş, 2016; Hiçyılmaz, 2018; Özel, Timur, Timur & Bilen, 2013). When the scales with technological components are examined, they appear to have been developed or adapted for a specific field, such as classroom teaching only (Kaya, Kaya & Emre, 2013; Hacıömeroğlu et al., 2014; Önal, 2016; Sarı & Bostancıoğlu, 2018).
Another common point of the examined scales is that they address the use of educational technologies. However, with rapidly developing technology, it is known that users not only access information but also produce content. Web 2.0 technologies, an example of this, already appear to be giving way to Web 3.0, the Semantic Web (Yağcı, 2011). In this context, information sources are no longer environments created solely by experts but have become platforms where information is developed and produced together with users. In a study conducted with teachers on Web 2.0 technologies, teachers stated that they feel better about using technology with Web 2.0 tools and see themselves as different from other teachers (Tatlı, İpek Akbulut & Altınışık, 2016). Platforms built on Web 2.0 technologies appear to play a new role in the transformation of teaching and learning (Alexander & Levine, 2008). Activities such as collaborating with Web 2.0 technologies, actively participating in content creation, and generating and sharing information online have emerged (Grosseck, 2009). These activities allow students to become content producers rather than merely listening to lectures. At the same time, with Web 2.0 technologies, teachers are transformed from information distributors into people who produce content to facilitate learning (An & Williams, 2010). Thus, students are at the heart of the learning process (Palaigeorgiou & Grammatikopoulou, 2016). Researchers emphasize that for Web 2.0 technologies to be effective, content must be produced by the users themselves (Rahimi, van den Berg & Veen, 2015; Al-Qallaf & Ridha, 2018). In an environment where students produce content, it is inevitable for teachers to produce information as well.
In an environment where Web 4.0 is already on the agenda (Yağcı, 2011), teachers need to be involved in the production process for technologies like Web 2.0 to be effective (Okello-Obura & Ssekitto, 2015). Due to developing technology and the increasing use of information and communication resources, the transformation of traditional teaching methods has made it necessary for teachers to be productive individuals in order to remain effective in teaching processes (Thomas & Thomas, 2012).
Consequently, for the reasons stated above, no valid and reliable measurement tool has been found in the literature that measures Technological Knowledge independently, based on the competence of teachers and teacher candidates to use technology as a production tool. Accordingly, the aim of this research is to develop a valid and reliable Technological Formation Scale that measures the ability to use technology as a production tool.

Research Model
This research is a scale development study using the descriptive survey model. In the survey model, information is collected from a wide audience using answer options determined by the researcher. Generally, in survey research, researchers are concerned with how opinions and characteristics are distributed across the individuals in the sample rather than with why they arise (Fraenkel & Wallen, 2006).

Sample
The study group of the research consisted of 672 teachers working at the Ministry of National Education and 84 teacher candidates studying at a medium-sized university in the Black Sea Region of Turkey. The participants came from different subject areas such as Computer Education, Mathematics, Science, and Classroom Teaching. Twenty-four randomly selected participants from the study group were surveyed again to test the stability of the scale. Descriptive statistics of the study group are shown in Table 1.

Development Process of the Scale
In order to establish an item pool, a literature review was first conducted to reveal the meaning of technological formation (Cox & Graham, 2009; Hu & Fyfe, 2010). Then, the literature on teachers' technological formation knowledge was examined (Bulman & Fairlie, 2016; Adıgüzel et al., 2011; Voogt & McKenney, 2016; Tekin & Polat, 2014; Bakioğlu & Şentuna, 2001). To determine competencies, the basic education and teaching policies of international organizations such as the Council of Europe, the World Bank, ILO, OECD, UNESCO and UNICEF, and the proficiency documents of countries such as the USA, Australia, Finland, France, Hong Kong, Great Britain, Canada and Singapore, were examined (MoNE, 2017). A literature review was also conducted to determine how information technologies and internet technologies should be used in educational environments (McKnight et al., 2016; Carver, 2016; Sarıtepeci et al., 2016). Finally, it was decided what kind of knowledge teachers should have in order to use information technologies and internet technologies. Accordingly, the item pool was formed with the information obtained from the literature (Sarı & Bostancıoğlu, 2018; Önal, 2016; Okello-Obura & Ssekitto, 2015; Hacıömeroğlu et al., 2014; Thomas & Thomas, 2012; Yağcı, 2011; An & Williams, 2010; Grosseck, 2009; Alexander & Levine, 2008). In addition, considering the role of thinking skills in using technology for production, it was considered that teachers should have computational thinking skills in order to produce (Özden, 2015). Within this framework, the computational thinking scale developed by Korkmaz, Çakır and Özden (2017) was examined, and 15 items of this scale were considered appropriate for inclusion in the item pool.
Based on the information obtained from the literature review, 71 items were included in the item pool. To check the appropriateness of the items for the properties to be measured, the content validity of the item pool was assessed by taking the opinions of four faculty members, experts in the field of education, from two different universities. Based on the experts' feedback, incorrect or hard-to-understand statements were corrected and 3 items were removed from the item pool. As a result of these updates, the final item pool emerged, and a "Technological Formation Scale" trial form with 68 items was created. The scale was prepared as a 5-point Likert-type scale with the options (1) Strongly Disagree, (2) Disagree, (3) Undecided, (4) Agree and (5) Strongly Agree.

Data Analysis
In the data analysis process, the negatively worded items (32, 41 and 49) were first reverse-scored. Then, the data were analysed with the SPSS package program, and validity and reliability analyses were performed on the obtained data. The KMO and Bartlett tests were used to determine the suitability of the data for factor analysis, and factor analysis was carried out because the values were found to be appropriate. Russell (2002) states that a KMO value above 0.90 indicates that the data are well suited to factor analysis. Furthermore, according to the Bartlett test results, H0 was rejected at the 0.05 significance level (Büyüköztürk, 2002; Eroğlu, 2008). Then, an exploratory factor analysis was conducted to determine the construct validity and factor structure of the scale. Factor analysis is a statistical technique that aims to explain the measurement with a small number of factors by combining variables that measure the same structure or quality (Büyüköztürk, 2006). In describing construct validity, Büyüköztürk, Çakmak, Akgün, Karadeniz and Demirel (2016) emphasized how accurately the scale items measure the concept they are intended to measure. The construct validity of the scale was tested with exploratory factor analysis, item discrimination analysis and item-total correlation analysis.
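The reverse-scoring step described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual data handling: the item numbers match the negatively worded items mentioned, but the response values and function names are invented.

```python
# Reverse-score negatively worded items on a 5-point Likert scale.
NEGATIVE_ITEMS = {32, 41, 49}  # negatively worded items in the trial form

def reverse_score(response, points=5):
    """Map 1->5, 2->4, ..., 5->1 on a `points`-point Likert scale."""
    return points + 1 - response

def recode(responses):
    """Reverse-score the negative items, leave the rest unchanged."""
    return {item: reverse_score(r) if item in NEGATIVE_ITEMS else r
            for item, r in responses.items()}

# Example: a respondent who marks Strongly Agree (5) on negative item 32
# is recoded to 1 before any further analysis.
print(recode({31: 4, 32: 5, 41: 1}))  # {31: 4, 32: 1, 41: 5}
```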
Factor loadings were calculated using the Varimax orthogonal rotation technique in order to divide the scale items into factors. The calculation of factor loadings is the main criterion for determining and interpreting the factors (Balcı, 2009). If the correlation of an item with its factor is at least ±0.30, the relationship between that item and the factor it belongs to is accepted as significant (Turanlı, Cengiz & Bozkır, 2014). According to the Principal Component Analysis, items whose factor loadings were below 0.30 and items distributed across multiple factors were identified and discarded. Finding the smallest number of factors that best represents the relationships between the items is the main purpose of factor determination (Kalaycı, 2006). A solution explaining at least 40% of the total variance is considered sufficient for determining the appropriate number of factors (Büyüköztürk, 2002; Eroğlu, 2008).
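The item-retention rule described above can be sketched as follows. The precise cross-loading criterion (exactly one loading reaching 0.30) is an assumption for illustration, as is every loading value shown; the study's actual rotated loadings are in Table 2.

```python
# Keep an item only if it loads on exactly one rotated factor at |loading| >= 0.30.
THRESHOLD = 0.30

def keep_item(loadings):
    """True if exactly one factor loading reaches the threshold in absolute value."""
    strong = [l for l in loadings if abs(l) >= THRESHOLD]
    return len(strong) == 1

rotated = {            # item -> loadings on the 4 rotated factors (invented)
    "I03": [0.72, 0.12, 0.08, 0.05],   # clean loading  -> keep
    "I05": [0.41, 0.38, 0.10, 0.02],   # cross-loading  -> drop
    "I99": [0.22, 0.18, 0.11, 0.09],   # no loading     -> drop
}
retained = [item for item, ls in rotated.items() if keep_item(ls)]
print(retained)  # ['I03']
```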
After factor analysis, an independent-samples t-test was performed to determine the discrimination of the items. To further examine the validity of the scale, Pearson's r was used to compute item-total correlations; thus, it was found to what extent each item supports the factor it belongs to. To determine item discrimination, the total scores were ordered from largest to smallest, the upper and lower 27% groups were formed, and the extent of differentiation between these groups was determined. Stability tests and internal consistency were used to determine the reliability of the scale. Cronbach's Alpha was used as the internal consistency coefficient; a reliability coefficient of 0.70 or above indicates that a scale is reliable (Kartal & Dirlik, 2016). Scale reliability was also supported by the Guttman Split-Half and Spearman-Brown tests and equal-length split-half correlations. Test-retest was used to analyse the stability of the scale: the scale items were administered to 24 participants twice, four weeks apart, and the correlation values between the two data sets were examined.
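The item-total correlation step can be illustrated with a small pure-Python Pearson's r; the score vectors below are invented for the sketch and are not the study's data.

```python
# Pearson's r between one item's scores and the total score of its factor.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

item_scores   = [3, 4, 2, 5, 4, 1]          # one item, six respondents (invented)
factor_totals = [22, 27, 18, 30, 26, 12]    # total score of that item's factor

r = pearson_r(item_scores, factor_totals)
print(round(r, 3))
```

A high positive r, as in Table 3, indicates the item supports the factor it belongs to.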

Data Collection
Online and printed forms were used to reach teachers working all over Turkey. To reach more participants with the online form, information was shared via Facebook in social groups created for teachers and prospective teachers. The printed form was used to reach participants in institutions and universities, and the data obtained with printed forms were transferred to digital form.

Findings Regarding the Validity of the Scale
The scale's construct validity, item-factor correlations and item discrimination values were examined for the validity of the Technological Formation Scale. The findings are presented below.

Construct Validity (Exploratory Factor Analysis)
First, the Kaiser-Meyer-Olkin (KMO) and Bartlett tests were carried out to determine the construct validity of the Technological Formation Scale. As a result of the analysis, KMO = 0.968 and the Bartlett test yielded χ2 = 39783.238, df = 1485, p = 0.000. Within the framework of these values, the 68-item scale was found suitable for factor analysis. Then, the factor structure of the scale was determined with the Varimax orthogonal rotation technique. The rotation was carried out 4 times according to the item loading and item distribution conditions. In the first analysis, a 4-factor structure emerged, and items 1, 2, 4, 32, 41 and 49 were removed from the item pool as they were distributed across more than one factor. As a result of the second factor analysis, a 4-factor structure emerged; since item 5 was divided across multiple factors, it was removed from the scale. In the third factor analysis, again a 4-factor structure was obtained, and items 15, 16, 39, 40 and 61 were removed from the scale as they were distributed across more than one factor. In the last factor analysis, one further item was removed from the item pool and the scale was composed of 4 factors. After the Varimax rotation, the factor loadings of the 4-factor scale were found to be between 0.810 and 0.543, and the four factors together explained 62.544% of the total variance. The distribution of scale factors is shown in Figure 2.
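For reference, the Bartlett sphericity statistic reported above follows χ2 = -[(n - 1) - (2p + 5)/6] ln|R|, where R is the item correlation matrix, p the number of items, and n the sample size. A minimal pure-Python sketch on an invented 3-variable example (not the study's 68-item data):

```python
# Bartlett's test of sphericity from first principles. A large chi2 (small
# p-value) rejects H0 that R is an identity matrix, so factoring is warranted.
import math

def det(m):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def bartlett_chi2(corr, n):
    """chi2 = -[(n - 1) - (2p + 5) / 6] * ln|R| with df = p(p - 1) / 2."""
    p = len(corr)
    return -((n - 1) - (2 * p + 5) / 6) * math.log(det(corr))

R = [[1.00, 0.62, 0.55],    # invented 3-item correlation matrix
     [0.62, 1.00, 0.48],
     [0.55, 0.48, 1.00]]
print(round(bartlett_chi2(R, n=100), 1))
```

Note that df = p(p - 1)/2 gives 55 × 54 / 2 = 1485 for a 55-item set, matching the reported value.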

Figure 2. Eigenvalues of the factors (scree plot).
When Figure 2 is examined, the slope flattens after the 4th point, which shows that the scale has 4 factors. As a result of these analyses to determine the factors and scale items, the item loadings, eigenvalues and explained variance of the 55 items in the scale are presented in Table 2. According to Table 2, the first factor of the scale consists of 30 items, the second factor of 7 items, the third factor of 12 items and the fourth factor of 6 items. In the first factor, the factor loadings range between .785 and .596, the eigenvalue is 22.490, and the contribution to the total variance is 29.488%. In the second factor, the factor loadings range between .862 and .670, the eigenvalue is 3.578, and the contribution to the total variance is 12.221%. In the third factor, the factor loadings range between .846 and .529, the eigenvalue is 6.513, and the contribution to the total variance is 13.835%. In the fourth factor, the factor loadings range between .790 and .502, the eigenvalue is 1.818, and the contribution to the total variance is 6.999%. The items of each factor were analysed separately and the factor names were determined. Within this framework, the first two factors were evaluated under the title Production: the first factor, consisting of 30 items, was named "Content Development" and the second factor, consisting of 7 items, was named "Interactive Object Development". The remaining two factors were evaluated under Productive Thinking: the third factor, consisting of 12 items, was named "Problem Solving" and the fourth factor, consisting of 6 items, was named "Creativity".

Table 3. Item-factor correlations (N = 759; ** = p < .001; I = Item).

Table 3 shows that the item-factor correlation values are between .812 and .653 for the first factor; between .960 and .853 for the second factor; between .841 and .604 for the third factor; and between .573 and .228 for the fourth factor. Each item has a significant and positive relationship with the total score of the factor to which it belongs (p < .001). According to these results, it can be said that each item serves the purpose of the factor to which it belongs.

Item Distinctiveness
In order to calculate the discrimination of the items, item scores were sorted from largest to smallest and the upper and lower 27% groups were determined. Subsequently, an independent-samples t-test was applied to the scores of the upper and lower groups of 205 participants each. Table 4 presents the t-values and statistical significance levels indicating the discrimination of the items. When Table 4 is analysed, the t-values obtained for the 4 factors and 55 items vary between 34.145 and 3.839, and the t-value for the total score of the scale was 55.671. All values obtained were significant (p < .001). As a result of the t-test, it can be said that the discrimination of each item and of the overall scale is high.
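The 27% upper/lower-group procedure can be sketched as follows. The respondent totals here are synthetic stand-ins, and in the study the t statistic is computed per item on the two groups' item scores; the mechanics are the same.

```python
# Item discrimination via the 27% rule: rank respondents by total score,
# take the top and bottom 27%, and compare means with a pooled-variance
# independent-samples t statistic.
import math

def t_statistic(upper, lower):
    """Pooled-variance independent-samples t statistic for two score lists."""
    n1, n2 = len(upper), len(lower)
    m1, m2 = sum(upper) / n1, sum(lower) / n2
    v1 = sum((x - m1) ** 2 for x in upper) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in lower) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

totals = sorted(range(100), reverse=True)     # stand-in for 100 total scores
cut = round(0.27 * len(totals))               # 27 respondents per group
upper_group, lower_group = totals[:cut], totals[-cut:]

print(round(t_statistic(upper_group, lower_group), 2))
```

A large, significant t means the item separates high scorers from low scorers, i.e. it discriminates well.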

Findings Regarding the Reliability of the Scale
The data obtained were analysed with internal consistency and stability analyses to determine the reliability of the scale. The findings and analysis steps are presented below.

Internal Consistency Level
Internal consistency and stability tests were used to determine the reliability of the whole scale and the four factors. Cronbach's Alpha was used as the internal consistency coefficient; a reliability coefficient of .70 or above indicates that a scale is reliable (Kartal & Dirlik, 2016). Scale reliability was also supported by the Guttman Split-Half and Spearman-Brown tests and equal-length split-half correlations. Table 5 shows the analysis results for the scale and all factors. As shown in Table 5, the coefficients indicate that the whole scale and each factor can make consistent measurements.
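Cronbach's Alpha and the Spearman-Brown step-up used above can be sketched in pure Python. The small response matrix and the half-test correlation of 0.75 are invented for illustration, not the study's values.

```python
# Internal-consistency sketch: Cronbach's alpha over an item-score matrix,
# plus the Spearman-Brown corrected split-half coefficient.
import statistics as st

def cronbach_alpha(rows):
    """rows: respondents x items. alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = len(rows[0])
    item_vars = [st.variance(col) for col in zip(*rows)]
    total_var = st.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def spearman_brown(r_half):
    """Step a half-test correlation up to full-test length."""
    return 2 * r_half / (1 + r_half)

data = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 4, 5],   # 6 respondents x 4 items
        [3, 3, 3, 4], [1, 2, 2, 1], [4, 4, 5, 4]]
print(round(cronbach_alpha(data), 2))
print(round(spearman_brown(0.75), 2))   # -> 0.86
```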

Stability Level
The stability level of the scale was determined by the test-retest method. For this purpose, the final 55-item scale was reapplied to the 24-person group after 4 weeks, and the correlations between the two applications were examined for the whole scale, for each factor and for each item. The findings obtained from the analysis are given in Table 6.

Table 6. Test-retest correlation coefficients (N = 24; * p < .001; ** p < .005).

Table 6 shows that the test-retest correlation coefficients of the items in the scale ranged between .31 and .897, and each relationship was significant and positive. The correlation coefficients of the factors constituting the scale ranged from .507 to .875, and the correlation of the total score was .873; each relationship was significant and positive. Accordingly, it can be said that the scale makes stable measurements.

Discussion and Conclusions
As a result of this study, a scale was developed to determine the technological formation of teachers and teacher candidates. The Technological Formation Scale was prepared as a 5-point Likert-type scale consisting of 55 items and 4 factors. Each item in the scale has the options Strongly Agree (5), Agree (4), Undecided (3), Disagree (2) and Strongly Disagree (1). The validity of the scale was examined with factor analysis and item discrimination tests, and the exploratory factor analysis revealed that the scale consists of 4 factors. Research has been carried out in the literature on Production and Productive Thinking, the upper headings of these factors. In the light of these studies, concepts related to production and thinking about production were examined. Thinking is defined as a functional feature of the mind that separates humans from other living beings (Doğan, 2011). Productive Thinking was divided into five different operations by Guilford (1956): divergent thinking, evaluative thinking, cognition, convergent thinking, and memory. Productive Thinking refers to using these five operations to produce new ideas, or solutions to issues, from past or present ideas and knowledge. It has also been shown that Guilford's theory of Productive Thinking is useful in making more effective decisions in engineering (Brown & Katz, 2009; NRC, 2001). Hoffman and Hoffman (1964) emphasised that productive thinking can include problem solving and analytic and logical dimensions, as well as creative thinking. Under the heading of Productive Thinking, creativity has different definitions in the literature (Rouquette, 1992; Torrance, 1968; Stewig & Vail, 1985; Turgut, 1993; Craft, 2003). According to Wegerif (2007), creativity is something that should be developed and maintained in the information age, where environments for producing information exist.
In this context, it can be said that Productive Thinking necessarily involves problem solving and creativity skills.
Production can be defined as the creation of products through a series of processes. Technological materials that concretise the abstract concepts being taught provide convenience in education (Gülen, 2010; Gülen & Demirkuş, 2014). Therefore, it can be said that it is important for teachers and teacher candidates to produce concrete products with interactive objects as well as with information and communication technologies. The importance of material development was also emphasized in the FATİH Project (MoNE, 2013). Thus, considering the developments in the literature, the items and factors were created. Some items in the study were obtained from the thinking dimension of the Computational Thinking Scale (CTS) developed by Korkmaz, Çakır and Özden (2017); as a result of the analysis, these items were included in the factors under the heading of Productive Thinking, while the remaining items were distributed to the factors under the heading of Production. Considering these findings together with the literature, it can be said that Productive Thinking comprises problem solving and creativity processes, and Production comprises content creation and material development.
To determine the construct validity of the scale, eigenvalues, factor loadings and explained variance were calculated, and in line with the results obtained, the construct validity of the scale can be said to be appropriate. With item-factor correlation analysis, the correlation between each factor and its items was calculated, and the contribution of each item to its factor was determined. According to these results, it can be said that each item is compatible with its factor and the scale, and makes a significant contribution. To calculate item discrimination, item scores were sorted in descending order, upper and lower 27% groups were formed, and an independent-samples t-test was applied to these group scores. According to the results of the t-test, the whole scale and the items have a high level of discrimination. To determine the reliability of the whole scale and the four factors, stability tests and internal consistency were used. The internal consistency coefficient was evaluated with the Cronbach's Alpha value; in addition, internal consistency coefficients were calculated with the Guttman Split-Half and Spearman-Brown tests and equal-length split-half correlations. When the results of the analyses are examined, it is concluded that the whole scale and each factor can make consistent measurements.
Similar scale studies on this subject have been found in the literature (Schmidt et al., 2009; Kaya & Dağ, 2013; Hacıömeroğlu et al., 2014; Timur & Taşar, 2011; Öztürk & Horzum, 2011; Horzum, 2011; Kuşkaya-Mumcu, 2011; Gökçek & Yılmaz, 2019; Lee & Tsai, 2010; Usta & Karakuş, 2016; Hiçyılmaz, 2018; Özel et al., 2013). When these scales are examined, it is seen that they are field-dependent or concerned with the use of technology rather than with production in technology. This scale development study aims to measure the ability of teachers and teacher candidates to produce content using educational technologies. The biggest factor that makes measuring content-production ability necessary is the developments in technology and their reflections on educational technologies (Yağcı, 2011; Carver, 2016; An & Williams, 2010). Therefore, it is believed that this scale development study will contribute to studies on technology in education.
The factor analysis findings obtained in this study show that the Technological Formation Scale is a valid and reliable measurement tool. With the TFS, it will be possible to examine the technological formation of teachers and prospective teachers in a way that includes the competence to produce content, such as building a website with Web 2.0 technologies. It is thought that, in the light of the scale results, pre-service and in-service support can be provided so that teachers and teacher candidates use appropriate technology for the purposes of education and training activities.