Extract travel data


The travel industry is ever-changing, with exabytes of data being updated on a daily, sometimes hourly, basis. For organizations operating in such a dynamic industry, access to accurate and structured data at the right time becomes of utmost importance.

Here is a use case from one of our clients, whose requirement was to extract travel data daily. Read on to know more.

With ever-growing competition, access to such data becomes a priority to stand out in the industry!

Approximately , to , records per site per day were delivered based on the files uploaded by the client.

Get Travel Data Daily with Mass Scale Travel Feed Extraction

Client Details: A well-known budget hotel chain that wanted to be updated on hotel prices across India twice a day, to strategize better in terms of pricing as well as improve its offerings in general.

Crawl Requirements: The client had a specific set of requirements:
- Check-in and check-out dates were to be specified by them.
- The data was to be uploaded to a file-sharing server, and our crawlers were to pick the files up at predefined times and process them.

Possible situations:
- File available in the morning but not in the afternoon
- File not available in the morning but available in the afternoon
- File available both in the morning and afternoon
- No file uploaded in the morning or afternoon

Fields for extraction were predefined and in a set order as specified by the client. Crawl frequency was twice a day.

Target Sites and Approximate Monthly Volumes: Most of the major OTAs (Online Travel Agents) in India, with a volume of around 30 million records per month.


Alternatively, traditional travel behavior analysis has focused on single data items and their specific elements [ 9 , 10 , 11 ]. Thus, the aforementioned studies have failed to simultaneously consider multiple data items in smart card data.

It is understood that the different attributes present in smart card data such as origin station, destination station, the time of the day, the day of the week, and passenger type affect each other.

However, consideration of specific data items extracted from smart card data impedes the exploration of effects wielded by the excluded data items on travel behaviors.

Moreover, the results may vary widely depending on the data items being analyzed. From the perspective of effective big data analysis, this study advocates the simultaneous analysis of as many data items as possible.

To this end, it studies patterns in travel behaviors of individuals based on smart card data collected from a provincial city in Japan.

Five data items (henceforth referred to as attributes) are used in the analysis: boarding day of the week (henceforth, day), boarding time of day (henceforth, time), passenger type, origin station, and destination station.

Methodologies based on tensor decomposition have been proposed for the simultaneous consideration of multiple attributes [ 12 , 13 , 14 ]. Tensor decomposition can be an effective method to analyze data of the 3rd order or higher [ 15 ].

Moreover, it enables analysis without disturbing the original data structure [ 16 ]. A tensor representation allows the summarization of multivariate data in a multi-dimensional array.

The tensors of the lowest order are referred to by specific common names—a 0-order tensor is called a scalar, a 1st order tensor is called a vector, and a 2nd order tensor is called a matrix [ 14 , 17 ]. Tucker decomposition—a particular model of tensor decomposition—estimates the factor matrices that represent the characteristics of each attribute in high-order data [ 18 , 19 , 20 ].

The characteristics of each attribute are called factors. The number of factors is determined arbitrarily [ 19 ]. In addition, a core tensor representing the combination of factors corresponding to each attribute is estimated alongside the factor matrices [ 18 , 19 , 20 ].

Tucker decomposition reveals the interactions between attributes in the original data based on the estimated factor matrices and core tensors. However, tensor decompositions, like factor analysis, yield increasingly complex results as the number of attributes increases.

In addition, as the number of elements corresponding to each attribute is increased, the unique determination of the number of factors using tensor decomposition and interpretation of the components of the factors becomes progressively more difficult [ 12 ]. This complicates the understanding and interpretation of factor matrices and core tensors [ 12 ].

Finally, tensor decomposition is incapable of extracting the characteristics of elements based on a small number of samples. This study attempts to analyze multiple attributes simultaneously by constructing a graph.

This approach is different from those used in previous studies and is novel to this study. An increase in the number of combinations of attributes, i.e., in the number of vertices and edges, makes the graph more complex. It is difficult to grasp data characteristics from a complex graph. To address this problem, this study extracts groups of more relevant vertices from a graph based on the similarity between vertices.

Several pattern extraction methods using the similarity index have been proposed in the literature [ 21 , 22 , 23 ]. However, they are unsuitable for the extraction of patterns from graphs. Graphs are represented by two-dimensional tables, which are symmetric matrices with zero diagonal elements.

In this case, the combinations of column information for each row are different. This should be noted during the selection of the optimal pattern extraction method.

This study implements a data polishing approach to extract travel patterns from the graph. It clarifies group boundaries based on a hypothesis—two vertices have multiple common neighbors in a graph if both are included within a dense sub-graph of a certain size [ 24 ].

In data polishing, all vertex pairs possessing at least a certain number of common neighbors, i.e., pairs considered to lie within the same cluster, are connected by new edges. On the other hand, all edges whose endpoints do not satisfy the condition are deleted because they are not considered to lie within the same cluster [ 24 ].

The graph is progressively updated by repeating this operation. Data polishing modifies the input data, enabling the extraction of groups of related vertices without the loss of group structures in the data [ 24 ].
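The iterative procedure described above can be sketched in Python. This is an illustrative reading only: it uses the Jaccard similarity of closed neighborhoods as the common-neighbor criterion, and the exact similarity measure and convergence handling in [ 24 ] may differ.

```python
def polish(adj, threshold, max_iter=20):
    """One polishing pass connects every vertex pair whose closed
    neighborhoods (the vertex plus its neighbors) have Jaccard
    similarity >= threshold, and deletes every other edge; passes
    repeat until the graph stops changing (a sketch of data polishing,
    not the paper's exact variant)."""
    for _ in range(max_iter):
        new = {v: set() for v in adj}
        verts = sorted(adj)
        for i, u in enumerate(verts):
            for v in verts[i + 1:]:
                nu, nv = adj[u] | {u}, adj[v] | {v}
                if len(nu & nv) / len(nu | nv) >= threshold:
                    new[u].add(v)
                    new[v].add(u)
        if new == adj:
            break  # fixed point reached
        adj = new
    return adj

# a three-vertex path: a - b - c
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
dense = polish({k: set(v) for k, v in path.items()}, threshold=0.3)
```

At a loose threshold the path is polished into a triangle (the missing edge a-c is added because a and c share the common neighbor b), while a stricter threshold leaves the path unchanged, illustrating how the threshold controls cluster boundaries.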

In addition, it enables the extraction of vertex groups without depending on the number of samples by focusing on the similarity of vertices instead. This advantage is particularly useful in this study as smart card data of elderly citizens or children may have small sample sizes.

Therefore, we adopt data polishing as the preferred method to extract the travel behaviors of such passengers. In addition, as data polishing is a soft clustering method, it enables multiple characteristics of each vertex to be captured.

This makes the extraction and analysis of travel patterns via this method particularly flexible. This study proposes an improved version of the existing data polishing method for analyzing smart card data, one that considers multiple attributes simultaneously. We extract the travel patterns of smart card users in terms of five attributes.

Then, groups of usage vertices with similar connections to OD vertices are extracted. In addition, the origin station and destination station combinations are clarified with the largest number of users for each usage group. Via this process, an understanding of the characteristic travel patterns is gathered in terms of the origin and destination stations of passengers on particular days of the week and at particular times of the day.

The remainder of this paper is organized as follows. The smart card data used in this study is introduced in " Data description and aggregate analysis " section.

The proposed data polishing-based method used to extract travel patterns is described in " Method " section. The results of the analysis are presented in " Results " section and discussed in " Discussion " section. Finally, the paper is concluded in " Conclusions " section.

This study uses smart card data collected in the Kagawa Prefecture in Japan (Fig. ). Kagawa is located in the northeast part of Shikoku, one of the four main islands of Japan. It has a population of approximately , people as of October 1, . Most of them depend on automobiles for transportation.

However, people who cannot drive automobiles, such as the elderly, rely on public transportation. This makes the maintenance of public transportation particularly important.

Improvement of transit services requires a thorough understanding of the travel behavior of current users. The smart card used in Kagawa is called IruCa. IruCa was introduced to be used on Kotoden trains or buses operated by the Takamatsu Kotohira Electric Railroad Company and on buses operated by other bus companies in the prefecture.

As of March , , IruCa cards had been issued. Of these, 75, were commuter passes and , were non-commuter passes. It is understood that most IruCa users are residents of the Kagawa Prefecture because IruCa can be used only within the prefecture.

Therefore, it is reasonable to expect that this study sheds light on the travel behaviors of the residents of the Kagawa Prefecture based on IruCa data. Although IruCa can be used on both trains and buses, this study only focuses on smart card data related to Kotoden trains.

The associated train network comprises 52 stations in total, including the Kotohira, Nagao, and Shido lines as depicted in Fig. Two stations, Takamatsu-Chikko and Kataharamachi, are connected to both the Kotohira and Nagao lines.

The Kawaramachi station is connected to all three lines, enabling Kotoden passengers to transfer to any line at this station. This study focuses on five attributes related to each trip (day, time, passenger type, origin station, and destination station) collected within the smart card data, as depicted in Table 1.

Passengers of types 1 — 5 are users of non-commuter passes. On the other hand, passengers of types 6 — 11 are commuter pass users. Smart card data collected over a period of 15 months, ranging from December 1, to February 28, , is used. Trips were included in the analysis only if (1) they were taken between a.m. and 12 p.m. on any of the three lines, and (2) it took at least 60 s to move between the origin and destination stations.

Figures 3, 4 and 5 depict the average number of daily users, sorted by day, time, and passenger type, respectively. These reveal the usage patterns of the Kotoden train network. From the results sorted by the days of the week presented in Fig. 3, the average number of users on weekdays is 24,, which is approximately 2. times the corresponding weekend figure. This is consistent with the large number of weekday trips that are taken for commuting to schools and offices. From the results of usage sorted by the times of the day presented in Fig. 4, the average number of users steadily increases between and hrs, which is understood to be accounted for by commuters to work and school.

The average number of users then decreases between and hrs, and thereafter, the average daily number of users remains approximately constant until hrs. After hrs, the average daily number of users exhibits another increment, and is particularly high between and hrs. This is understood to be accounted for by people returning home after hrs.

The results of usage sorted by passenger type are presented in Fig. 5. By contrast with the more numerous passenger types, the average numbers of child commuters to work and of persons with disabilities commuting to work or school are revealed to be as small as 0.

All graphs denoted in this paper are undirected graphs. Although cliques are usually defined in terms of a subgraph, we adopt the aforementioned definition in this paper, following Uno et al.

A clique that is not completely included in any other distinct clique is called a maximal clique. This section explains a new methodology for the extraction of travel patterns from smart card data based on the method of data polishing.

We define travel patterns in terms of combinations of five attributes: day (8 categories), time (20 categories), passenger type (11 categories), origin station (52 categories), and destination station (52 categories).
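For scale, the attribute categories above imply the following vertex counts, assuming (as the later graph-construction steps suggest) that a usage vertex combines day, time, and passenger type, while an OD vertex combines origin and destination:

```python
from math import prod

categories = {"day": 8, "time": 20, "passenger_type": 11,
              "origin": 52, "destination": 52}

# 8 * 20 * 11 = 1,760 possible usage vertices
usage_vertices = prod(categories[k] for k in ("day", "time", "passenger_type"))

# 52 * 52 = 2,704 possible OD vertices
od_vertices = categories["origin"] * categories["destination"]

# simultaneous combinations of all five attributes
all_combinations = usage_vertices * od_vertices
```

The full cross-product runs to millions of combinations, which is why the method works with usage and OD vertices and their co-occurrence rather than enumerating every attribute combination directly.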

These five attributes are considered simultaneously to analyze the types of people who move between particular origin and destination stations at different times of the day on different days of the week. Uno et al. proposed a clustering method based on data polishing. This study introduces additional steps to the aforementioned clustering method proposed by Uno et al.

and proposes a corresponding method for the analysis of smart card data. In addition, we do not focus solely on maximal cliques, but on all cliques (please consult the "Enumeration of Cliques" section). The procedure for extracting travel patterns proposed in this study comprises five steps: (1) construction of the co-occurrence graph, (2) construction of the similarity graph, (3) application of data polishing to the similarity graph, (4) enumeration of cliques, and (5) extraction of combinations of origin and destination stations related to each clique.

Each step is explained below in order by using conceptual figures. In this study, an iMac (CPU: Intel Core i7, memory: 32 GB) was used to execute all the aforementioned steps.

The total execution time was approximately 30 minutes using programs written in the R language. The graph depicted in Fig. is represented as a two-dimensional table, i.e., a matrix.

The diagonal components of this matrix are 0. The sum of each row corresponds to an OD vertex, the sum of each column corresponds to a usage vertex, and each element in the matrix corresponds to edge information.

For example, in Fig. , co-occurrence is expressed by the ratio of common users among the users corresponding to each pair of usage and OD vertices. Further, a statistical test is performed to determine the significance of co-occurrence, to rule out the possibility that its manifestation is coincidental rather than causal.

In this paper, t-values, which are used as criteria for co-occurrence in the natural language processing field, are adopted, and the statistical significance of co-occurrence is adjudged by a t-test. This study considers co-occurrence to be significant if the absolute value of the t-value is equal to or greater than 1.
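The excerpt does not give the paper's exact formula, so the sketch below uses the standard NLP collocation t-score, in which the observed co-occurrence probability is compared with the probability expected under independence; the sample counts are hypothetical:

```python
from math import sqrt

def cooccurrence_t(n_x, n_y, n_xy, n_total):
    """t-score for the co-occurrence of x and y, as commonly used for
    collocation detection in NLP (an assumed form; the paper's exact
    statistic is not shown in the excerpt)."""
    p_x = n_x / n_total
    p_y = n_y / n_total
    p_xy = n_xy / n_total
    # observed joint probability minus the value expected under
    # independence, scaled by the standard error of the observation
    return (p_xy - p_x * p_y) / sqrt(p_xy * (1 - p_xy) / n_total)

# e.g. two events that each occur 100 times in 10,000 samples
# and co-occur 40 times
t = cooccurrence_t(100, 100, 40, 10_000)
```

A large t-value indicates that the pair co-occurs far more often than chance would predict, so the corresponding edge is kept in the co-occurrence graph.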

In this study, we focus on the determination of similar connections between usage vertices and OD vertices and construct a similarity graph with high-similarity usage vertices.

Although Simpson and Dice coefficients can be used as similarity measures, this study uses the Jaccard coefficient following Uno et al.
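The three candidate similarity measures can be compared on the neighbor sets of two vertices: for sets A and B, Jaccard divides the overlap by the size of the union, Dice by the mean of the set sizes, and Simpson by the size of the smaller set.

```python
def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|
    return len(a & b) / len(a | b)

def dice(a, b):
    # 2|A ∩ B| / (|A| + |B|)
    return 2 * len(a & b) / (len(a) + len(b))

def simpson(a, b):
    # |A ∩ B| / min(|A|, |B|)
    return len(a & b) / min(len(a), len(b))

# two vertices sharing two of their three neighbors each
x, y = {1, 2, 3}, {2, 3, 4}
```

On this pair, Jaccard gives 0.5 while Dice and Simpson both give 2/3; Jaccard is the strictest of the three, which suits thresholding edges in the similarity graph.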

The construction of the similarity graph from the co-occurrence graph is illustrated using an example presented in Fig. Following the same procedure, the similarities between all pairs of usage vertices are calculated, and a new graph is constructed using the usage vertices as vertices.

Edges are only inserted between those pairs of usage vertices whose similarities exceed a predefined threshold, as in the example depicted in Fig. By construction of the similarity graph, usage vertices that exhibit similar connections with OD vertices are connected by edges in the similarity graph.

The similarity measure of sets is used to adjudge whether usage vertex pairs share a strong connection. In this study, the Jaccard coefficient is used as the similarity measure as in the case of the construction of the similarity graph.

We illustrate the application of the data polishing procedure using the similarity graph presented in Fig. First, the similarities between all pairs of usage vertices constituting the similarity graph are calculated using Eq. (3).

By contrast, edges between pairs of vertices whose similarities are less than the threshold are deleted. Normally, data polishing is required to be applied several times to achieve convergence. However, in this example, the shape of the graph does not change from that of Fig.

Therefore, data polishing is terminated after only one application in this case. This example requires only one polishing because of the simplicity of the co-occurrence graph. This step involves the enumeration of all cliques in the polishing graph.

In this study, we contend that various travel patterns of IruCa users can be understood by enumerating all cliques, including maximal ones. To this end, we extract all cliques, in contrast to the approach undertaken by Uno et al.
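A minimal sketch of enumerating every clique, not just maximal ones, in an undirected graph; practical implementations of this step, such as those by Uno et al., use far more efficient enumeration algorithms:

```python
def all_cliques(adj):
    """Enumerate every clique of an undirected graph given as
    {vertex: set_of_neighbors}. Candidates are extended in a fixed
    order, so each clique is emitted exactly once; single vertices
    count as trivial cliques and can be filtered out afterwards."""
    order = sorted(adj)
    found = []

    def extend(clique, candidates):
        for i, v in enumerate(candidates):
            grown = clique + [v]
            found.append(frozenset(grown))
            # only later candidates adjacent to v can extend this clique
            extend(grown, [u for u in candidates[i + 1:] if u in adj[v]])

    extend([], order)
    return found

triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
cliques = all_cliques(triangle)
```

On a triangle this yields seven cliques in total: three singletons, three edges, and the maximal three-vertex clique, so four cliques of two or more vertices.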

This reveals groups of users who exhibit similar travel behaviors. By contrast, users in maximal cliques exhibit unique travel patterns. Next, we attempt to estimate the most frequent origin and destination stations corresponding to users in each extracted clique.

Via this process, we identify the types of passengers who travel between different sets of origin and destination stations at different times of the day on different days of the week.

Therefore, only one parameter needs to be defined to extract the cliques. In previous studies [ 24 ], the threshold was set arbitrarily. In this study, by contrast, the threshold is determined from indices of the graph itself; this approach is unique to this study. The clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together.

It is higher in a graph in which vertices adjacent to each other are connected by edges. In our case, as all the vertices in a clique are connected by edges, it can be argued that the clustering coefficients of the vertices in the polishing graph increase on average when the clique is generated.

The average clustering coefficient is the average of the clustering coefficients of all vertices in a graph. It is calculated using Eq. (5), where N denotes the total number of vertices. In this study, we focus on the relationship between the average clustering coefficient and the similarity threshold.

The average clustering coefficient corresponding to each threshold is depicted in Fig. Although the decision was made to set the threshold at the value that maximizes the average clustering coefficient, multiple thresholds attain this maximum. Therefore, it is not possible to unambiguously select the threshold based on the average clustering coefficient alone.

To address this problem, the graph density was also calculated. The graph density increases as the number of edges approaches the maximal possible number of edges; in other words, a dense graph exhibits high graph density. However, the boundaries of the vertex groups (cliques) cannot be determined when the graph is dense.

Therefore, this study utilizes the threshold that corresponds to the lowest graph density. By using two indices, it is possible to extract cliques such that the boundaries between them are clear.

The graph density corresponding to each threshold is presented in Fig. In this study, the threshold was set at the point where the average clustering coefficient was at a maximum and the graph density was at a minimum.
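Both selection indices can be computed directly from the graph's adjacency sets; a small sketch (a complete triangle versus a three-vertex path) shows how they move:

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Local clustering coefficient of v: the fraction of v's neighbor
    pairs that are themselves connected by an edge."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
    return 2 * links / (k * (k - 1))

def average_clustering(adj):
    # the average over all N vertices, as in Eq. (5)
    return sum(clustering_coefficient(adj, v) for v in adj) / len(adj)

def density(adj):
    # 2E / (N(N - 1)): actual edges over the maximal possible edges
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) / 2
    return 2 * edges / (n * (n - 1))

triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
```

The triangle scores 1.0 on both indices, while the path scores 0.0 on average clustering and 2/3 on density, illustrating why a threshold with high clustering but low density yields cliques with clear boundaries.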

With the threshold selected from the results presented in Fig. , applying the proposed method to the smart card data yields the polishing graph depicted in Fig. .

To simplify the references to each usage vertex, the numbers assigned to them have been indicated in Fig. The total number of extracted cliques in the polishing graph is observed to be 127. Of these, 52 cliques consist of two usage vertices, 32 of three usage vertices, 5 of four usage vertices, 30 of five usage vertices, 3 of six usage vertices, and 1 each of nine, ten, fourteen, sixteen, and twenty-one usage vertices.
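The clique-size tally above determines the total directly:

```python
# cliques grouped by their number of usage vertices, as listed in the text
clique_counts = {2: 52, 3: 32, 4: 5, 5: 30, 6: 3,
                 9: 1, 10: 1, 14: 1, 16: 1, 21: 1}

total_cliques = sum(clique_counts.values())
```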

It is not possible to explicitly depict the compositions of all the extracted cliques for want of space. Instead, we highlight the differences between different combinations of day, time, and passenger type.

The total numbers of cliques in terms of different combinations are depicted in Tables 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11, each of which presents the results for the cliques with a given number of usage vertices as listed in the previous paragraph. For example, Table 3 presents the results for cliques consisting of three usage vertices.

Among the 32 extracted cliques, the number of combinations in which time and passenger type are the same is 28; in the other 4 cliques, only the passenger type is identical. From the data presented in Tables 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11, it is apparent that many of the extracted cliques contain combinations in which time and passenger type are identical.

These cliques represent behaviors of users of the same type at the same time but on different days of the week. For the cliques with six, nine, ten, fourteen, sixteen, and twenty-one usage vertices, this is the only combination observed. This can be interpreted to reflect the behaviors of the same type of users at different times of the day on different days of the week.

Combinations in which day and passenger type are identical or in which day, passenger type, and time are all distinct exist only in cliques with two usage vertices.

Further, the extracted cliques are compared in terms of passenger type. Cliques with two, three, four, and five usage vertices are observed to be related to various passenger types, as depicted in Fig. Thus, it can be concluded that several travel patterns of different passenger types can be surmised based on the results of cliques comprising small numbers of usage vertices.

On the other hand, travel patterns of three specific passenger types can be effectively assumed from cliques comprising large numbers of usage vertices, as all cliques with six or more usage vertices are observed to be related to adult or child commuters.

In this section, we present the characteristic combinations of origin and destination stations corresponding to each clique based on the results of the extracted cliques.

This corresponds to step 5 of the aforementioned procedure. Due to the large number of cliques, it is impossible to explicitly present the combinations of origin and destination stations corresponding to all cliques. The number of extracted cliques associated with child commuters is 35, that associated with the elderly is 2, and that associated with people with disabilities is . Although most of these cliques consist of identical passenger types, three cliques comprise different passenger types.

Henceforth, we focus on these three cliques and clarify the origin and destination station combinations corresponding to the largest number of users in each. The origin and destination station combinations corresponding to the largest number of users in each of the aforementioned cliques are depicted in Table . Thus, it is likely that the users in C2 also transfer at Kawaramachi.

In other words, the characteristic travel patterns can be discovered using the proposed method. However, it should be noted that the origin and destination stations thus identified need not correspond to the actual origin and destination of any individual user.

Although this study attempts to estimate the actual origins and destinations, this is not possible from the information encoded within the cliques.

We intend to clarify these issues in future research. Based on the obtained results, several of the extracted cliques were observed to be composed of identical passenger types.

However, it was suggested that the similarity between different passenger types could be quantitatively clarified based on cliques containing different combinations of day, time, and passenger type.

The prediction of such usage vertex combinations in advance may be difficult. This combination cannot be identified via basic analysis, such as aggregate analysis.

Even though an existing method can be used to identify similar combinations of day and time of travel corresponding to each passenger type, it is not practicable, because the number of combinations becomes too high for efficient processing as the number of considered categories increases.

This study also provided an understanding of the travel patterns of passengers of different types at different times of the day and on different days of the week. Characteristic travel behaviors of smart card users and origin station and destination station combinations could be successfully extracted for each clique.

The travel patterns based on the extracted cliques provide conclusions about the behaviors of adult commuters and students traveling in the morning and returning home in the afternoon. These patterns are intuitive, which corroborates the success of the pattern extraction performed. Moreover, the performance of the proposed method does not depend on the number of samples, as approximately half of all cliques were observed to be related to small groups of smart card users, such as children, the elderly, and people with disabilities.

With respect to the promotion of public transportation in society, it is important to enhance the utilization frequency of not only users who avail it regularly but also those who avail it sporadically. However, it is difficult to identify their travel patterns via data analysis because of the low number of associated samples.

This study demonstrates that data polishing is effective even in the case of users who use public transport sporadically, for whom the distribution of the number of samples is biased.

Web scraping is used to aggregate travel, hotel, and airline data from web sources using a web scraping tool or a scraping API. Travel data extraction automates and improves the efficiency of many tasks in the travel and hospitality industries. However, companies face challenges in achieving desired business outcomes due to a lack of knowledge about how to benefit from scraped travel data. Assume you want to make a hotel reservation: if you do not have a favorite hotel, you will search for the hotels most suitable for you using an online travel agency such as TripAdvisor, Booking.com, etc.

Data extraction of tourist places can help travel enthusiasts find destinations. It also supports travel agencies and travel listing companies in curating unique offerings and aggregating travel information. Businesses, instead, can extract tourist-places data to study travel trends and patterns. Web scraping is the most productive and fastest way to gather data from various sources. You can automate and streamline travel data search and extraction by learning the correct data scraping techniques or using a dedicated web scraping service. Are you wondering what exactly tourist places web scraping is all about?

Web scraping bots help travel agencies in obtaining real-time data from multiple web sources or specific web pages.

Product price and stock information are examples of data that are constantly changing. Assume you want to track and observe hotel room prices in a specific area for a month.

A web scraping bot will extract room pricing data on the first of the month. When the price of rooms changes on the website from which you extract data, your scraper also updates the price data.
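As a minimal sketch of the extraction step, the snippet below pulls room prices out of an HTML fragment with Python's standard-library parser. The `span class="price"` markup is hypothetical; real booking sites use their own markup and often render prices with JavaScript, which requires a browser-based scraper instead.

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collect the text of every <span class="price"> element
    (an assumed structure for illustration only)."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

# a stand-in for a fetched results page
sample = '<div><span class="price">₹2,499</span><span class="price">₹3,150</span></div>'
parser = PriceParser()
parser.feed(sample)
```

Running the same parse on the first of every month (or twice a day, as in the use case above) and storing the results gives the price time series you want to track.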

Thus, you will receive the most recent price data. The web scraper will make numerous connection requests to the website to update the price. If you make every connection request from the same IP address, the website will identify your web scraper and block it to prevent data scraping.

You can use a proxy server to scrape web data to avoid being detected and blocked by the website. Check out our in-depth guide to proxy server types to decide which proxy server is best for you.

A quick tip: Residential and Internet Service Provider (ISP) proxy servers are excellent choices for large-scale web scraping projects, because you must make multiple requests and IP anonymity and privacy are critical to avoid being identified as a scraping bot. If, on the other hand, your scraping project must be completed as soon as possible, datacenter proxies are the best option for completing tasks quickly.

Web scraping aggregates hotel, travel, and airline data from different data sources, such as hotel listings, reviews, ticket prices, location data, customer data, social media trends, and room and flight ticket availability (see Figure 2).
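A sketch of routing requests through a rotating proxy with the Python standard library; the proxy addresses are placeholders from the TEST-NET documentation range and must be replaced with addresses from your proxy provider:

```python
import random
import urllib.request

# Hypothetical proxy pool; substitute real addresses from your provider.
PROXY_POOL = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]

def opener_with_random_proxy():
    """Build a urllib opener that routes traffic through a randomly
    chosen proxy, so repeated requests do not all share one IP."""
    proxy = random.choice(PROXY_POOL)
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return urllib.request.build_opener(handler), proxy

opener, chosen = opener_with_random_proxy()
# opener.open("https://example.com/hotels") would now go through `chosen`
```

Building a fresh opener per batch of requests spreads traffic across the pool; dedicated proxy services add session management and retry logic on top of this basic pattern.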

The airline industry implements various ticket pricing tiers, such as economy, premium economy, and business class. Ticket prices change according to several factors, and airlines use web scraping to track them. Assume you want to get the most recent flight price information.

Suppose you are in the travel and tourism industry. In that case, your competitors are most likely using social media channels effectively for campaigns, brand building, and other purposes to improve their online presence. Web scraping bots can collect search results for your target keywords, allowing your business to identify the top competitors who rank for them.

You can check their websites and follow the social media accounts linked there. Web scraping bots help businesses collect publicly available hotel data from different social media platforms for trend analysis.

Map data from search engines is also critical for businesses. Results can be sorted by rating, and Google Maps data is an excellent resource for connecting customers with businesses.

Web scraping bots collect information about real estate, hotels, restaurants, vacation rentals, etc. If you believe your company could benefit from a web scraping solution, look through our list of web crawlers to find the best vendor for you. Cem has been the principal analyst at AIMultiple since . Cem's work has been cited by leading global publications including Business Insider, Forbes, and the Washington Post, global firms like Deloitte and HPE, NGOs like the World Economic Forum, and supranational organizations like the European Commission.

Travel Apps We Can Scrape

Here are some of the travel apps that can be scraped: Agoda, Expedia, TripAdvisor, Hopper, Tripcase, Kayak, MakeMyTrip, Car2go, TripIt, LonelyPlanet, GoogleTrips, Fliggy, Meituan, Naver, Traveloka, Trip, Grab, Hotel Tonight, Booking, Uber, Priceline, Travelocity, LateRooms, and HotelQuickly.


Whatever your project size is, we will handle it in accordance with the highest standards. The online travel industry operates in a highly competitive environment with constant price fluctuations. Staying competitive is paramount for businesses in this sector, such as hotels, tour operators, travel aggregators, online travel agencies (OTAs), airlines, rail companies, car rental agencies, and travel technology solution providers. Competition within the online travel industry is relentless, making it crucial for businesses to gain a competitive edge by leveraging real-time market data. This data provides insights into the market, competitors, and customer behaviors.

Author: Daimi
