Auburndale girls second, Columbus boys third at Edgar Cross Country Invitational

Jewell, Karl earn top-10 finishes for Apaches

By Paul Lecker, Sports Reporter

RIB MOUNTAIN — The Auburndale girls finished second, and the Marshfield Columbus Catholic boys took third at the Edgar Cross Country Invitational on Tuesday at Nine Mile Forest Recreation Area.

Isabella Jewell took fourth in 20:53.7, and Kali Karl was eighth in 21:30.9 for the Auburndale girls, who totaled 66 points, 13 behind meet champion Medford in the girls team standings.

Columbus Catholic was ninth, with Melanie Lang leading the way with an 18th-place finish in 22:09.1.

Marissa Ellenbecker of Edgar won the girls race in 19:49.5, 44 seconds ahead of Medford's Franny Seidel.

Medford also won the boys team title with 37 points and three finishers in the top five. Trey Ulrich of Medford won the race in 16:56.5.

Joshua Guyer took 10th in 18:00.8 and Jeremiah Giles was 22nd in 18:58.9 for the Columbus boys, who finished third, two points behind second-place Mosinee.

Carver Empey led the Auburndale boys by taking 26th place (19:17.8) as the Apaches finished ninth in the team standings.

(Hub City Times Sports Reporter Paul Lecker is also the publisher of MarshfieldAreaSports.com.)

Edgar Cross Country Invitational
Sept. 27, at Nine Mile Forest Recreation Area

Varsity

Girls
Team scores: 1. Medford 53; 2. Auburndale 66; 3. Mosinee 83; 4. Edgar 119; 5. Neillsville 183; 6. Westfield 183; 7. Loyal 192; 8. Three Lakes 192; 9. Marshfield Columbus Catholic 193; 10. Crandon 194; Owen-Withee and Pittsville incomplete.
Top 10, and Auburndale and Marshfield Columbus Catholic finishers: 1. Marissa Ellenbecker (ED) 19:49.5; 2. Franny Seidel (MED) 20:33.8; 3. Kortnie Volk (TL) 20:49.5; 4. Isabella Jewell (AUB) 20:53.7; 5. Hannah Brewster (ED) 20:56.1; 6. Baileigh Johnson (NE) 21:05.5; 7. Mikaya Flyte (WEST) 21:05.9; 8. Kali Karl (AUB) 21:30.9; 9. Lauren Meyer (MED) 21:44.5; 10. Iris Schira (MOS) 21:51.4; 14. Vanessa Mitchell (AUB) 21:57.1; 15. Anna Kollross (AUB) 21:59.3; 18. Melanie Lang (MCC) 22:09.1; 26. Hailey Roehl (MCC) 23:08.1; 28. Taylor Stanton (AUB) 23:17.2; 29. Julianna Kollross (AUB) 23:18.3; 42. Marissa Immerfall (MCC) 24:15.5; 51. Morgan Albrecht (MCC) 25:09.7; 56. Emmalee Richardson (AUB) 25:38.5; 57. Amanda Momont (AUB) 25:39.2; 76. Greta Schiferl (MCC) 33:24.1.

Boys
Team scores: 1. Medford 37; 2. Mosinee 107; 3. Marshfield Columbus Catholic 109; 4. Crandon 112; 5. Westfield 128; 6. Tomahawk 147; 7. Edgar 175; 8. Owen-Withee 199; 9. Auburndale 214; 10. Neillsville 237; 11. Loyal 278; Pittsville and Three Lakes incomplete.
Top 10, and Auburndale and Marshfield Columbus Catholic finishers: 1. Trey Ulrich (MED) 16:56.5; 2. Payton Cummings (WEST) 17:21.1; 3. Tanner Moris (CRA) 17:24.1; 4. Derek Rudolph (MED) 17:28.2; 5. Ray Zirngible (MED) 17:30.6; 6. Grayson Barrett (MOS) 17:37.3; 7. Nick Koller (ED) 17:54.6; 8. Elliot Genteman (LOY) 17:57.7; 9. Ryley Frozene (WEST) 17:57.8; 10. Joshua Guyer (MCC) 18:00.8; 22. Jeremiah Giles (MCC) 18:58.9; 26. Carver Empey (AUB) 19:17.8; 30. Bryce Fuerlinger (MCC) 19:22.2; 37. Peyton Nystrom (MCC) 19:42.6; 42. Paul Kollross (AUB) 19:54.8; 43. Leonard Steinert (MCC) 19:56.9; 48. Paul Peplinski (AUB) 20:35.9; 49. Matthew Leick (AUB) 20:42.2; 50. David Nielsen (MCC) 20:50.1; 53. Gage Stoflet (AUB) 21:13.7; 55. Darren Kieffer (AUB) 21:21.7; 61. Josh Peplinski (AUB) 22:17.1; 64. Ian Lang (AUB) 22:38.6.


Winter is Coming for Game of Thrones – and Federal Talent Management

To read the original post on NGA Net, please click here.

"Winter is Coming" is a key theme of the popular HBO series Game of Thrones. With its warning of constant vigilance, the meaning is clear: no matter how good or calm things seem now, the good times and serenity won't last forever, and you need to prepare and be proactive to ensure you're ready for when the tide turns.

While talk of the long, dark winter in Game of Thrones centers on the inevitable attacks of the White Walkers and their ability to conquer the Seven Kingdoms if left unchecked, the same warning could easily apply to the current federal talent management environment for many agencies. With emerging and growing threats and challenges, if changes are not made soon, winter will surely come for these agencies. Faced with the retirement tsunami, millennial hiring challenges, a leadership and engagement crisis and more, the weight of legacy federal talent acquisition and performance management systems is clearly holding back innovation and progress.

Many agencies are encumbered with legacy systems that carry numerous problems, but as the list below highlights, there are remedies from modern systems. These systems have been specifically designed with new architecture and a new mindset that help agencies evolve, innovate and, most importantly, address the existing and looming challenges.

Lack of flexibility
If an agency decides it wants or needs to alter its workflow and processes, whether in the talent acquisition, onboarding or performance and development stages of its talent management program, the changes are typically only possible with a major system change that is time-consuming, resource-intensive and, of course, expensive to implement.
Modern system fix: Complete flexibility throughout the system that allows minor or major workflow and process changes without changes to the talent management system.

Lack of configurability
Traditional legacy federal talent management systems do not provide the ability to configure the system according to an agency's unique and ever-changing requirements and parameters. This results in a stagnant system that is unable, or finds it difficult, to meet an agency's human capital needs.
Modern system fix: Configurable options that provide agencies with the ability to modify parameters virtually anywhere in the system. This might also include automating tasks and altering configurations along the way to assess the changes made.

Lack of analytics
Without data, it's difficult or impossible to have a real sense of what is working, what is not, and where improvements can be made. As Yogi Berra once said, "If you don't know where you're going, you'll end up someplace else." Most legacy federal talent management systems provide little or no analytical insight, leaving agencies running blind and with no rationale for decisions made.
Modern system fix: Today's systems capture data from all aspects of an agency's workflow, including recruitment, onboarding, performance, succession and more. With detailed information available, easy-to-use reports can be created by non-data scientists to provide executive leadership and hiring managers with insightful information that results in faster, data-driven workforce decisions that help agencies achieve their missions.

An "All or Nothing" Approach
Without an ability to implement small changes and achieve minor, smaller wins, it's very difficult for agencies to stay on track as they look to implement change within their organizations. Legacy systems are limited by an inability to allow tweaks for testing different theories and hypotheses, which not only severely limits changes but lengthens the time period for any change to occur.
Modern system fix: Because modern systems allow fast and easy configuration changes, agencies can take incremental yet significant steps on their journey toward a fully integrated talent management system. They don't need to plan with the full requirements in mind at the beginning. Processes can be tweaked and data observed to easily spot where the most beneficial changes can be implemented.

High cost
The dollar hit from legacy federal talent management systems begins early on and never abates. With a steep upfront cost, they typically carry a heavy annual fee as well. But even more importantly, the burden is on federal agencies to support the system from an infrastructure standpoint, meaning big expenses for hardware, security, ongoing maintenance and more.
Modern system fix: Modern cloud-based systems lower costs to agencies significantly by removing infrastructure costs, including hardware and IT personnel costs. Furthermore, because of their inherent flexibility, systems are implemented much more quickly and at much lower cost. In addition, concerns such as security and reliability are transferred from the agency to the service provider. All these advantages provide a strong, clear return on investment (ROI).

These heavy burdens hold back agencies from innovating and prevent them from implementing changes that are necessary to address the new landscape. With legacy systems in place and no plan to move away from them, federal agencies face a long, dark winter of discontent in dealing with the realities of human capital management in the 21st century.

A move to lower-cost, modern, flexible talent management systems opens a new world to agencies tackling today's toughest talent management issues — and a bright, hopeful future.


Breaking News in the Industry: November 20, 2018

Identity fraudster faces million dollar fine
A Carmichael, California, man was charged with a series of identity fraud-related offenses along with felony firearm possession after collecting stolen mail and personal identification information to make illegal purchases. Manuel Campos Rodriguez, 41, was charged with 10 counts in all, including bank fraud, aggravated identity theft, possession of credit and debit card-making equipment, possession of stolen mail, and unlawfully possessing 15 or more credit or debit cards, according to a news release issued by the US Attorney's Office for the Eastern District of California. Court documents allege that Rodriguez stole mail and collected credit and debit cards, account numbers, social security numbers, and driver's license numbers, which he used to make purchases at stores such as Home Depot and Macy's. Rodriguez faces up to 30 years in prison and a $1 million fine. He is being held without bail at the Sacramento County jail.  [Source: The Sacramento Bee]

Feds indict blowtorch burglars
Two men accused of using blowtorches to break into Target stores across New England will be arraigned on federal charges. Investigators say the pair stole nearly $200,000 worth of iPads and iPhones in just two months. An indictment filed in US District Court in Worcester accuses Elijah Aiken and a second unnamed suspect of stealing electronics from Target stores. Investigators say the suspects used a portable blowtorch to cut through the metal doors and got inside the stores, including one in Easton, Massachusetts. The first theft happened in Pennsylvania in December 2014, where the pair took 15 iPads. Then, over the course of two months, police say the same two men hit stores from Connecticut to New Hampshire. Investigators say they always took iPads and iPhones, and they left behind a hole in the metal doors. The biggest losses were in Massachusetts: about $154,000 worth of merchandise was stolen from the stores in Westborough and Easton. Investigators say the pair would sell the goods to a buyer "in and around" New York City. Aiken was arrested at the scene of the last burglary in Connecticut; he was hiding in the snow. He was sentenced to two years in prison for that crime, but is now facing federal charges of transporting stolen goods over state lines.   [Source: Boston25 News]

NRF says ORC at all-time high
Organized retail crime is continuing to grow, with nearly three-quarters of retailers surveyed reporting an increase in the past year, according to the 14th annual ORC study released today by the National Retail Federation (NRF). "Retailers continue to deal with increasing challenges and complications surrounding organized retail crime," NRF Vice President of Loss Prevention Bob Moraca said. "These criminals find new ways to expand their networks and manipulate the retail supply chain every day. The retail industry is fighting this battle by upgrading technology, improving relationships with local law enforcement and taking steps such as tightening return policies, but it is a never-ending battle." The report found that 92 percent of companies surveyed had been a victim of ORC in the past year and that 71 percent said ORC incidents were increasing. Losses averaged $777,877 per $1 billion in sales, up 7 percent from last year's previous record of $726,351. Retailers attributed the increase to the easy online sale of stolen goods, gift card fraud, shortage of staff in stores and demand for certain brand-name items or specific products. In addition, a number of states have increased the threshold for a theft to be considered a felony, meaning criminals can steal a larger quantity of goods while keeping the crime a misdemeanor and avoiding the risk of higher penalties that come with the commission of a felony. ORC typically targets items that can be easily stolen and quickly resold, and top items range from low-cost products like laundry detergent, razors, deodorant, infant formula and blue jeans to high-end goods like designer clothing and handbags, expensive liquor and cellphones. Stolen goods are recovered anywhere from flea markets and pawnshops to online, with gift cards often ending up on online gift card exchanges. While online fencing has increased over the years, retailers say 60 percent of recovered merchandise, on average, is found at physical locations. While at least 34 states have ORC laws, 73 percent of retailers surveyed support the creation of a federal ORC law, noting that ORC gangs often operate across state lines.   [Source: BusinessWire]

Task force goes after credit card skimmers
(The following article was written by Brevard Sheriff Wayne Ivey.) 42 law enforcement officers took a very proactive approach to protecting our citizens from credit card fraud and identity theft in preparation for the holiday season. As part of the initiative, agents from the Brevard County Sheriff's Office Economic Crimes Task Force, United States Secret Service, Department of Agriculture, and FDLE physically examined 251 gas stations throughout Brevard County in search of illegal credit card skimmers that had been covertly installed in gas pumps. Assisted by detectives from the Cocoa Beach, Satellite Beach, Cocoa, West Melbourne, Melbourne, Palm Bay, Titusville and Rockledge Police Departments, the operation led to the seizure of 18 skimmers that had been installed inside the pumps to target unsuspecting citizens. Electronic skimmers capture the credit card data when the card is used to purchase fuel via the "pay at the pump" features offered by most gas stations today. The device is covertly installed inside the pump by criminals who illegally gain access to the inside of the fuel pump and later return to collect the in-line skimmer and stolen data.   [Source: Space Coast Daily]

Fleeing shoplifter injures police officer
An officer was injured by a shoplifting suspect who was attempting to escape arrest in a New Orleans East shopping center Sunday afternoon, according to New Orleans police. Ishionte Jachson, 23, was handcuffed after being accused of shoplifting in the 9600 block of Chef Menteur Highway, NOPD said, but managed to get loose. As an officer attempted to stop Jachson from escaping the area, she was knocked to the ground, according to NOPD spokesman Juan Barnes. The injured officer was brought to the hospital with a head injury, Barnes said. Jachson was apprehended by another officer. Her booking photo was not immediately available. Barnes said additional charges are pending.   [Source: Fox8 News]

Which retailer calls PD 9 times a day, and who pays?
Police come to arrest the person accused of stealing a $2 ChapStick and investigate the theft of $10 sunglasses. They're asked to settle domestic spats, break up parking lot disputes and remove disorderly drunks. These calls to police, thousands of which are made each year, chew up hours of the Columbia, South Carolina, Police Department's time. And they all start at Walmart. Four Walmart locations rely on Columbia police more than any other establishment in the city, according to The State's review of CPD crime data from 2014 to present. The big-box retailer generated far more calls to police compared to much larger shopping centers such as Columbiana Centre, which is home to more than 100 stores, and other comparable retailers like Target. Last year alone, Columbia police responded to a Walmart, on average, nine times a day. That's one call every three hours. And taxpayers are settling the bill. In the past four years, the vast majority of Walmart calls, about 40 percent, involved suspected theft. Only 8 percent dealt with violence or some kind of disturbance. Columbia police recognized the problem in July and stopped responding to misdemeanor shoplifting calls if the suspect had already left the store. "Just with that subtle change, we've been able to see a difference," he said. Now, officers are responding to roughly 20 percent fewer incidents of Walmart shoplifting, he said. But some question whether that goes far enough. Walmart representatives recognize the problem, too, saying the company has invested millions in people, programs and technology to police their own stores.   [Source: The State]


Former Army soldier kills wife with iron Buddha statue in Bengaluru

A former Indian Army soldier killed his wife by hitting her with a Buddha statue at their house in Bengaluru on March 14. The 59-year-old had served in the Indian Army for 20 years and was presently working as a conductor with the Bangalore Metropolitan Transport Corporation (BMTC).

Javare Gowda's younger son Chandan has lodged a complaint with the police. Chandan repeatedly called his mother Manjula on March 14 and, when she didn't respond, he lodged a complaint with the police. He told the police that his parents fight because of his father's drinking problem, which may have led to the murder.

Chandan said that his father developed an alcohol addiction after retirement. Gowda would often ask his wife for money to buy alcohol. "On March 14, Manjula refused to pay him money for buying alcohol. In a fit of rage, Gowda took an iron Buddha statue and hit Manjula on her head. She died at the hospital," the police said.

Gowda had earlier taken leave from BMTC and had been staying at home since February. Police officials said that Manjula had spoken with her son on the day of the incident. Their elder son Chetan, who is a software engineer, lives abroad. The police have arrested the ex-serviceman.

Crime rate in Bengaluru down?
As per official records with Bengaluru Police, there has been a considerable decline in criminal activities since 2016. Around 200 murder cases were reported in Bengaluru in 2018 as compared to 228 and 234 such cases in 2016 and 2017, respectively. In a majority of the cases, the police claimed to have detained the murderers.

[Image caption: Gangster Lakshmana murdered by rival gang near ISKCON temple (Twitter)]

Around 500 cases of cruelty by husbands were registered by Bengaluru Police in 2016, which dropped to 375 in 2018.


Mission Mangal, Batla House day 4 box office collection: Akshay Kumar starrer

Mission Mangal has had a terrific first weekend at the box office, with its collection going close to Rs 100 crore by its day 4. On the other side, Batla House too performed well at the commercial circuit and inched close to Rs 50 crore by the end of Sunday.

Akshay Kumar starrer Mission Mangal had an excellent start at the Indian box office with a collection of Rs 29.16 crore on the Independence Day holiday. It became Akshay's biggest opener of all time. The film faced a major decline in its earnings on the second day, as the collection was almost half of the opening-day business: Mission Mangal collected Rs 17.28 crore on day 2, but picked up strongly on Saturday, earning Rs 23.58 crore on day 3 at the domestic market.

Nonetheless, Mission Mangal enjoyed huge occupancy in theatres across the country on its day 4 and witnessed an impressive collection. The movie collected Rs 27.54 crore at the Indian box office on Sunday, taking its total earnings close to Rs 100 crore by the end of the first weekend. While the movie was just Rs 2.44 crore short of the first milestone, Mission Mangal crossed the mark by Monday afternoon.

On the other hand, John Abraham starrer Batla House too has been doing well at the ticket counters. The film started with a collection of Rs 15.55 crore, followed by a decline on the second day when it earned Rs 8.84 crore. However, it also witnessed a jump in business on Saturday, collecting Rs 10.90 crore on day 3. Batla House witnessed further growth in collection on Sunday as the film collected Rs 12.70 crore on day 4, inching close to Rs 50 crore.

It will be interesting to see how the two films perform over the weekdays.


Public money spent on dev propaganda ahead of polls

With the elections around the bend, the government will be apprising the people of the rural areas about its achievements in the development sector over the past several years. And, as usual, public money will be used for the purpose.

As part of this campaign, the directorate of mass communication, under the information ministry, has taken up a project for strengthening publicity for the development of rural communities, at a cost of around Tk 600 million. The Executive Committee of the National Economic Council (ECNEC) approved the project on 3 April.

This project includes campaigning on the development achievements of the government over the last few years, aimed at giving the ruling party an upper hand in the election race. Under the project, videos on development will be projected in every union. This will be accompanied by concerts, women's meetings and free meals.

Director of the mass communication directorate Jasim Uddin has said the project will be implemented fully from July. The main programme is entitled 'Egiye Jachhe Bangladesh' (Bangladesh is advancing).

The 10 initiatives of the prime minister being highlighted under the project are 'Ekti Bari Ekti Khamar' (one farm for every household), Asrayan, Digital Bangladesh, the education assistance programme, electricity for all, community clinics and child development, social security and more. Achievements of the government at home and abroad over the last eight years will also be highlighted in the campaign through film shows, folksong sessions, women's gatherings, Facebook posts, YouTube, as well as radio and TV broadcasts.

The project spans three years, from November 2017 to November 2020. According to the project documents, Tk 535 million, or 90 per cent of the total budget, will be spent between 2 and 19 June.

Commenting on the relevance of the project, former advisor of the caretaker government AB Mirza Azizul Islam said there is no justified basis for such projects. No development is achieved through such projects. He added that such projects are taken up with the election in mind, to win votes from the people.

Taking up projects before the polls is nothing new. In the current fiscal year, three projects have been passed for constructing schools, colleges, mosques and temples, according to the demand of the concerned MPs. Three more such projects are in the pipeline, awaiting allocations for constructing madrasas, public toilets and marketplaces in other relevant constituencies. The MPs of these constituencies had been allocated Tk 30 million to 50 million for road construction in their areas over the last eight years. The development campaign project is the latest inclusion.

Films on the development of every union will be presented. A thousand leaflets on development will be distributed and over four and a half million leaflets will be printed at a cost of Tk 100 million. Jasim Uddin went on to say that before the commencement of the project, appointments will be made, premises will be rented and other preparations will be completed.

LED screens will be set up on pickup trucks in each of the 4,554 unions to screen the films. A school compound from each union will be selected for the film shows, and 20 teams will be employed to screen the films across the country. Every day at least one show will be held, with at least 20 shows in each district per month. In all, 21,360 shows will be screened. Five films costing Tk 100,000 each will be produced. Two-thirds of the total budget, that is Tk 380 million, has been allocated for this purpose.

Local popular folk music like bhawaiya, gombhira and jari-sari will also be performed. A total of 9,792 events will be held across the country. The events will cost Tk 20 million. Each of the folk singers will be paid Tk 700 per programme, bringing the total cost to Tk 30 million for this segment.

A women's gathering will be held at each upazila. The budget for the banner, stage decoration and participants is Tk 15,000. And those watching the development campaign films will be treated to good food, with Tk 20 million allocated for their treat.

AB Mirza Azizul Islam said people can see development for themselves if it takes place; it doesn't require further campaigning. Moreover, there is the TV and the radio for broadcasting development. Local leaders involved with the ruling party will benefit from this project, he observed.

*This report, originally published in Prothom Alo print edition, has been rewritten in English by Nusrat Nowrin.


6 Trinamool workers killed, several injured

Kolkata: Six Trinamool Congress workers were killed and several were left injured in attacks by supporters of the Opposition political parties on the day of the Panchayat polls on Monday. It may be mentioned that 14 Trinamool Congress workers were killed during pre-poll clashes, and on the day of the election, supporters and workers of the party were attacked in different parts of the state.

Partha Chatterjee, Secretary General of the Trinamool Congress, said workers and supporters of the party all across the state have shown tremendous endurance despite facing gruesome aggression. "Most of the people killed on Monday were workers of the Trinamool Congress. Six TMC workers were killed," he said.

It was a few hours after the poll had started that a Trinamool Congress worker raised his voice upon finding attempts at rigging at a polling booth at Meriganj near Kultali in South 24-Parganas. The allegation was made against the CPI(M) that they killed the Trinamool Congress worker, identified as Arif Ali Gazi. He was shot dead from point-blank range. The incident led to tension in the area. Police went to the spot and brought the situation under control. Locals protested against the incident and demanded the immediate arrest of the accused.

Another worker of the party, Sanjit Pramanik, was killed at Shantipur in Nadia. Sanjit was an MA student and was beaten up mercilessly along with two of his friends. Before he could understand anything, several people began beating them up. They continued thrashing him till he fell to the ground. Even bombs were hurled at him. Police went to the spot and took them to Shantipur State General Hospital, where Sanjit succumbed to his injuries. The miscreants hurled bombs to flee the area, realising that they would get caught as police were approaching the spot.

Bhola Tapadar, a TMC worker, was shot dead at Nakashipara in Nadia. He was taken onto the terrace of a house when he was returning after casting his vote and was shot dead from point-blank range. The victim's family members alleged that CPI(M) workers were behind the murder of Bhola.

In another incident, Krishnapada Sarkar was killed at Tehatta in Nadia during a clash that broke out near a polling station in the area. A police picket has been posted in the area to ensure that the law and order situation doesn't deteriorate. Moreover, several Trinamool Congress workers suffered injuries as well.


Now accreditation mandatory for technical engineering institutions

Kolkata: All the technical institutions and engineering colleges in the country will have to get accreditation in the next four years, said Prof Anil Dattatraya Sahasrabudhe, Chairman of the All India Council for Technical Education (AICTE).

He was speaking on the sidelines of the "National Conference on Indian Higher Education: Quality Assurance, Accreditation and Ranking" organised by the Education Promotion Society for India (EPSI) in a city hotel on Sunday. AICTE is the statutory body and a national-level council for technical education, under the Ministry of Human Resource Development.

Prof Sahasrabudhe said accreditation has been made mandatory for the institutions. "We have given four years' time to the colleges who are yet to get their accreditation. Many of the institutions are not yet prepared. Hence, the whole process of getting the accreditation will be completed in phases," the AICTE chairman told reporters.

He also added: "Currently, around 15 percent of the institutions have got accreditation, while another 50 percent of the colleges must complete the process within the next two years. Hundred percent accreditation will be done within the next four years. The institutions will not get any extra facilities, and funds will not be provided to them, if they do not complete the process of getting accreditation."

Prof Sahasrabudhe laid great emphasis on the quality of teaching and research. If the teachers are motivated and well trained, the quality of education will increase in leaps and bounds, the Chairman maintained. He also mentioned that four teachers' training academies will be set up in Gujarat, Rajasthan, Kerala and Guwahati, where teachers from various engineering colleges will be imparted training. The whole training would be divided into eight different modules. The programme will begin from next year.

Under the Marga Darshan scheme introduced by the Centre, various well-performing institutions would be given funds to support other neighbouring institutions, helping them to enhance their infrastructure and other aspects. Current expenditure on higher education is 1.2 percent of the total GDP.


Configuring and deploying HBase Tutorial

HBase is inspired by the Google Bigtable architecture and is fundamentally a non-relational, open source, column-oriented, distributed NoSQL database. Written in Java, it is designed and developed by many engineers under the framework of the Apache Software Foundation. Architecturally it sits on Apache Hadoop and runs by using the Hadoop Distributed File System (HDFS) as its foundation. It is a column-oriented database, empowered by a fault-tolerant distributed file structure known as HDFS. In addition to this, it also provides very advanced features, such as auto-sharding, load balancing, in-memory caching, replication, compression, near real-time lookups and strong consistency (using multi-version). It uses the concepts of block cache and Bloom filters to provide faster responses to online/real-time requests. It supports multiple clients running on heterogeneous platforms by providing user-friendly APIs.

In this tutorial, we will discuss how to effectively set up a mid- to large-size HBase cluster on top of the Hadoop/HDFS framework. We will also help you set up HBase on a fully distributed cluster. For the cluster setup, we will consider RHEL (RedHat Enterprise Linux 6.2, 64-bit); for the setup we will be using six nodes.

This article is an excerpt taken from the book 'HBase High Performance Cookbook' written by Ruchir Choudhry. This book provides a solid understanding of the HBase basics. Let's get started!

Configuring and deploying HBase

Before we start HBase in fully distributed mode, we will first set up Hadoop 2.2.0 in distributed mode, and then set up HBase on top of the Hadoop cluster, because HBase stores its data in HDFS.

Getting Ready

The first step will be to create a directory at /u/HBase B and download the tar file from the location given later. The location can be local, a mount point, or in a cloud environment; it can be block storage:

wget -b http://apache.mirrors.pair.com/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz

The -b option downloads the tar file as a background process. The output will be piped to wget-log. You can tail this log file using tail -200f wget-log.

Untar it using the following command:

tar -xzvf hadoop-2.2.0.tar.gz

This untars the file into a folder named hadoop-2.2.0 in your current directory.

Once the untar process is done, for clarity it's recommended to use two different folders, one for the NameNode and the other for the DataNode. I am assuming app is a user and app is a group on the Linux platform with read/write/execute access to these locations. If not, please create a user app and a group app if you have sudo su - or root/admin access; in case you don't, please ask your administrator to create this user and group for you on all the nodes and directories you will be accessing.
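If you do have root or sudo access, that user and group creation can be scripted. A minimal sketch, assuming the app user/group and the /u/HBase B mount point named in the text (the space in the path is kept exactly as printed, hence the quoting); adjust to your environment:

# Create the app user/group and the working directory, then hand ownership over.
sudo groupadd app
sudo useradd -m -g app app
sudo mkdir -p "/u/HBase B"
sudo chown -R app:app "/u/HBase B"
# Quick verification
id app
ls -ld "/u/HBase B"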
To keep the NameNodeData and the DataNodeData separate, for clarity let's create two folders using the following command inside /u/HBase B:

mkdir NameNodeData DataNodeData

NameNodeData will hold the data used by the name nodes and DataNodeData will hold the data used by the data nodes. ls -ltr will show the following results:

drwxrwxr-x 2 app app  4096 Jun 19 22:22 NameNodeData
drwxrwxr-x 2 app app  4096 Jun 19 22:22 DataNodeData

-bash-4.1$ pwd
/u/HBase B/hadoop-2.2.0
-bash-4.1$ ls -ltr
total 60K
drwxr-xr-x 2 app app 4.0K Mar 31 08:49 bin
drwxrwxr-x 2 app app 4.0K Jun 19 22:22 DataNodeData
drwxr-xr-x 3 app app 4.0K Mar 31 08:49 etc

The steps in choosing a Hadoop cluster are:

Hardware details required for it
Software required to do the setup
OS required to do the setup
Configuration steps

HDFS core architecture is based on master/slave, where an HDFS cluster comprises a solo NameNode, which is essentially used as the master node and owns the accountability for orchestrating and handling the file system namespace and controlling access to files by clients. It performs this task by storing all the modifications to the underlying file system and propagating these changes as logs and appends to the native file system files and edits. The SecondaryNameNode is designed to merge the fsimage and the edits log files regularly and keeps the size of the edit logs within an acceptable limit. In a true cluster/distributed environment, it runs on a different machine. It works as a checkpoint in HDFS.

We will require the following for the NameNode and DataNodes:

Operating System: Redhat-6.2 Linux x86_64 GNU/Linux, or another standard Linux kernel, on all nodes used for the Hadoop/HBase setup and other components.
Hardware/CPUs: 16 to 32 CPU cores for the NameNode/Secondary NameNode; 2 quad-/hex-/octo-core CPUs for the DataNodes.
Hardware/RAM: 128 to 256 GB (in special cases 128 GB to 512 GB) for the NameNode/Secondary NameNode; 128 GB to 512 GB for the DataNodes.
Hardware/Storage: It is pivotal to have the NameNode server on a robust and reliable storage platform, as it is responsible for many key activities like edit-log journaling; since the importance of these machines is very high and the NameNode plays a central role in orchestrating everything, RAID or any robust storage device is acceptable. For the DataNodes, 2 to 4 TB hard disks in a JBOD.

RAID is a redundant array of inexpensive (or independent) disks. There are many levels of RAID, but for a master or NameNode, RAID 1 will be enough. JBOD stands for Just a Bunch Of Disks. The design is to have multiple hard drives stacked over each other with no redundancy; the calling software needs to take care of failure and redundancy. In essence, it works as a single logical volume.

Before we start the cluster setup, a quick recap of the Hadoop setup is essential, with brief descriptions.

How to do it

Let's create a directory where you will have all the software components to be downloaded. For simplicity, let's take it as /u/HBase B.

Create different users for different purposes. The format will be user/group; this is essentially required to differentiate the different roles for specific purposes:

hdfs/hadoop is for handling the Hadoop-related setup
yarn/hadoop is for the YARN-related setup
HBase/hadoop
pig/hadoop
hive/hadoop
zookeeper/hadoop
hcat/hadoop

Set up directories for the Hadoop cluster. Let's assume /u as a shared mount point. We can create specific directories that will be used for specific purposes; a minimal sketch of this user and ownership setup follows.
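The sketch below assumes the user/group names listed above and the same /u/HBase B mount point; it is illustrative only, so adapt names and paths to your environment:

# One service account per component, all in a common hadoop group.
sudo groupadd hadoop
for u in hdfs yarn hbase pig hive zookeeper hcat; do
  sudo useradd -m -g hadoop "$u"
done
# Hand the HDFS data directories created earlier to the hdfs account.
sudo chown -R hdfs:hadoop "/u/HBase B/NameNodeData" "/u/HBase B/DataNodeData"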
Please make sure that you have adequate privileges on the folders to add, edit, and execute commands. Also, you must set up passwordless communication between the different machines, for example from the name node to the data nodes and from the HBase master to all the region server nodes (a sketch of this follows later in this section). Once the earlier-mentioned structure is created, we can download the tar files from the following locations:

-bash-4.1$ ls -ltr
total 32
drwxr-xr-x  9 app app 4096 hadoop-2.2.0
drwxr-xr-x 10 app app 4096 zookeeper-3.4.6
drwxr-xr-x 15 app app 4096 pig-0.12.1
drwxrwxr-x  7 app app 4096 HBase-0.98.3-hadoop2
drwxrwxr-x  8 app app 4096 apache-hive-0.13.1-bin
drwxrwxr-x  7 app app 4096 Jun 30 01:04 mahout-distribution-0.9

You can download these tar files from the following locations:

wget https://archive.apache.org/dist/HBase/HBase-0.98.3/HBase-0.98.3-hadoop1-bin.tar.gz
wget https://www.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
wget https://archive.apache.org/dist/mahout/0.9/mahout-distribution-0.9.tar.gz
wget https://archive.apache.org/dist/hive/hive-0.13.1/apache-hive-0.13.1-bin.tar.gz
wget https://archive.apache.org/dist/pig/pig-0.12.1/pig-0.12.1.tar.gz

Here, we will list the procedure to achieve the end result of the recipe as numbered single steps.

Let's assume that there is a /u directory and you have downloaded the entire stack of software. Go to /u/HBase B/hadoop-2.2.0/etc/hadoop/ and look for the file core-site.xml. Place the following lines in this configuration file:

<property>
  <name>fs.default.name</name>
  <value>hdfs://addressofbsdnsofmynamenode-hadoop:9001</value>
</property>

You can specify a port that you want to use, and it should not clash with the ports that are already in use by the system for various purposes. Save the file. This helps us create the master/NameNode.

Now, let's move on to set up the secondary NameNode. Edit /u/HBase B/hadoop-2.2.0/etc/hadoop/core-site.xml and add:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://custom location of your hdfs</value>
</property>
<property>
  <name>fs.checkpoint.dir</name>
  <value>/u/HBase B/dn001/hadoop/hdf/secdn,/u/HBase B/dn002/hadoop/hdfs/secdn</value>
</property>

We can go for the https setup for the NameNode too, but let's keep it optional for now.

Now, let's move towards changing the setup for HDFS; the file location will be /u/HBase B/hadoop-2.2.0/etc/hadoop/hdfs-site.xml. The separation of the directory structure is for the purpose of a clean separation of the HDFS blocks and to keep the configurations as simple as possible; this also allows us to do proper maintenance. Add these properties in hdfs-site.xml.

For the NameNode:

<property>
  <name>dfs.name.dir</name>
  <value>/u/HBase B/nn01/hadoop/hdfs/nn,/u/HBase B/nn02/hadoop/hdfs/nn</value>
</property>

For the DataNode:

<property>
  <name>dfs.data.dir</name>
  <value>/u/HBase B/dnn01/hadoop/hdfs/dn,/HBase B/u/dnn02/hadoop/hdfs/dn</value>
</property>

Now, for the NameNode HTTP address, or access using the http protocol:

<property>
  <name>dfs.http.address</name>
  <value>yournamenode.full.hostname:50070</value>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>secondary.yournamenode.full.hostname:50090</value>
</property>

Let's set up the YARN resource manager. Look for the YARN setup in /u/HBase B/hadoop-2.2.0/etc/hadoop/yarn-site.xml.

For the resource tracker, a part of the YARN resource manager:

<property>
  <name>yarn.yourresourcemanager.resourcetracker.address</name>
  <value>youryarnresourcemanager.full.hostname:8025</value>
</property>

For the resource scheduler, part of the YARN resource manager:

<property>
  <name>yarn.yourresourcemanager.scheduler.address</name>
  <value>yourresourcemanager.full.hostname:8030</value>
</property>

For the scheduler address:

<property>
  <name>yarn.yourresourcemanager.address</name>
  <value>yourresourcemanager.full.hostname:8050</value>
</property>

For the scheduler admin address:

<property>
  <name>yarn.yourresourcemanager.admin.address</name>
  <value>yourresourcemanager.full.hostname:8041</value>
</property>

To set up the local dirs:

<property>
  <name>yarn.yournodemanager.local-dirs</name>
  <value>/u/HBase/dnn01/hadoop/hdfs/yarn,/u/HBase B/dnn02/hadoop/hdfs/yarn</value>
</property>

To set up a log location:

<property>
  <name>yarn.yournodemanager.logdirs</name>
  <value>/u/HBase B/var/log/hadoop/yarn</value>
</property>

This completes the configuration changes required for YARN.

Now, let's make the changes for MapReduce. Open /u/HBase B/hadoop-2.2.0/etc/hadoop/mapred-site.xml and place this property configuration between the configuration tags:

<property>
  <name>mapreduce.yourjobhistory.address</name>
  <value>yourjobhistoryserver.full.hostname:10020</value>
</property>

Once we have configured the MapReduce job history details, we can move on to configure HBase.
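The passwordless SSH mentioned at the start of this step is a prerequisite for the start/stop scripts used below. A minimal sketch, run as the hdfs user on the NameNode/HBase master; the hostnames here are placeholders, not names from the original recipe:

# Generate a key once, then push it to every worker node.
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
for host in datanode01 datanode02 regionserver01 regionserver02; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "hdfs@$host"
done
# Should print the remote hostname without prompting for a password.
ssh hdfs@datanode01 hostname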
Let's go to the path /u/HBase B/HBase-0.98.3-hadoop2/conf and open HBase-site.xml. You will see a template with an empty configuration block; we need to add the following lines between the starting and ending configuration tags:

<property>
  <name>HBase.rootdir</name>
  <value>hdfs://HBase.yournamenode.full.hostname:8020/apps/HBase/data</value>
</property>
<property>
  <name>HBase.yourmaster.info.bindAddress</name>
  <value>$HBase.yourmaster.full.hostname</value>
</property>

This completes the HBase changes.

ZooKeeper

Now, let's focus on the setup of ZooKeeper. In a distributed environment, go to /u/HBase B/zookeeper-3.4.6/conf and rename zoo_sample.cfg to zoo.cfg. Open zoo.cfg with vi zoo.cfg and place the details as follows; this will create two instances of ZooKeeper on different ports:

yourzooKeeperserver.1=zoo1:2888:3888
yourZooKeeperserver.2=zoo2:2888:3888

If you want to test this setup locally, please use different port combinations. In a production-like setup, as mentioned earlier, yourzooKeeperserver.1=zoo1:2888:3888 follows the pattern server.id=host:port:port:

yourzooKeeperserver.1 = server.id
zoo1 = host
2888 = port
3888 = port

Atomic broadcasting is an atomic messaging system that keeps all the servers in sync and provides reliable delivery, total order, causal order, and so on.

Region servers

Before concluding, let's go through the region server setup process. Go to the folder /u/HBase B/HBase-0.98.3-hadoop2/conf and edit the regionservers file. Specify the region servers accordingly:

RegionServer1
RegionServer2
RegionServer3
RegionServer4

RegionServer1 is the IP or fully qualified CNAME of one region server. You can have as many region servers as you like (1..N, with N=4 in our case), but each CNAME and mapping in the regionservers file needs to be different.

Copy all the configuration files of HBase and ZooKeeper to the respective hosts dedicated to HBase and ZooKeeper. As the setup is in fully distributed cluster mode, we will be using different hosts for HBase and its components and a dedicated host for ZooKeeper.

Next, we validate the setup we've worked on by adding the required exports to .bashrc; this will make sure that later we are able to configure the NameNode as expected. It is preferable to put them in your profile, essentially /etc/profile; this makes sure only the shell that is used is impacted. (The excerpt does not list the exact exports; a plausible set is sketched below.)

Now let's format the NameNode:

sudo su $HDFS_USER
/u/HBase B/hadoop-2.2.0/bin/hadoop namenode -format

HDFS is implemented on the existing local file system of your cluster. When you start the Hadoop setup for the first time you need to start with a clean slate, and hence any existing data needs to be formatted and erased. Before formatting we need to take care of the following: check whether there is a Hadoop cluster running and using the same HDFS; if it is formatted accidentally, all the data will be lost.
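The excerpt refers to exports added to .bashrc or the profile but does not list them, and it warns against formatting a live cluster. The following is a plausible sketch only; every value here is an assumption based on the paths and variables used elsewhere in this recipe:

# Assumed environment exports (not shown in the original excerpt)
export JAVA_HOME=/usr/java/default
export HADOOP_PREFIX="/u/HBase B/hadoop-2.2.0"
export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
export HDFS_USER=hdfs
export YARN_USER=yarn
export PATH="$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin"

# Pre-format safety check: make sure no daemons are already using this HDFS.
jps | grep -E 'NameNode|SecondaryNameNode|DataNode|ResourceManager|NodeManager' \
  && echo "Hadoop daemons still running - stop them before formatting" \
  || echo "No Hadoop daemons found - safe to format"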
Now start the NameNode:

/u/HBase B/hadoop-2.2.0/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode

Now let's go to the secondary NameNode:

sudo su $HDFS_USER
/u/HBase B/hadoop-2.2.0/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start secondarynamenode

Repeat the same procedure on the DataNode:

sudo su $HDFS_USER
/u/HBase B/hadoop-2.2.0/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode

Test 01> See if you can reach http://namenode.full.hostname:50070 from your browser.

Test 02>

sudo su $HDFS_USER
touch /tmp/hello.txt

Now the hello.txt file exists in the tmp location:

/u/HBase B/hadoop-2.2.0/bin/hadoop dfs -mkdir -p /app
/u/HBase B/hadoop-2.2.0/bin/hadoop dfs -mkdir -p /app/apphduser

This creates a specific directory for this application user in the HDFS filesystem (/app/apphduser).

/u/HBase B/hadoop-2.2.0/bin/hadoop dfs -copyFromLocal /tmp/hello.txt /app/apphduser
/u/HBase B/hadoop-2.2.0/bin/hadoop dfs -ls /app/apphduser

apphduser is a directory created in HDFS for a specific user, so that data is separated per user; in a true production environment many users will be using it. You can also use hdfs dfs -ls / commands if hadoop dfs is shown as deprecated. You must see hello.txt once the command executes.

Test 03> Browse http://datanode.full.hostname:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=$datanode.full.hostname:8020

It is important to change the data host name and other parameters accordingly. You should see the details on the DataNode. [The original post showed a screenshot of the DataNode browse page and the equivalent command-line output here.]

Validate the YARN/MapReduce setup. Execute this command from the resource manager:

/u/HBase B/hadoop-2.2.0/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager

Execute the following command from the NodeManager:

/u/HBase B/hadoop-2.2.0/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager

Executing the following commands will create the directories in HDFS and apply the respective access rights:

cd /u/HBase B/hadoop-2.2.0/bin
hadoop fs -mkdir /app-logs              # creates the dir in HDFS
hadoop fs -chown $YARN_USER /app-logs   # changes the ownership
hadoop fs -chmod 1777 /app-logs         # sticky bit: world-writable, but only owners can remove their own files

Execute MapReduce. Start the job history server:

/u/HBase B/hadoop-2.2.0/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR

Let's run a few tests to be sure we have configured things properly.

Test 01: From the browser, or with curl, browse http://yourresourcemanager.full.hostname:8088/.

Test 02:

sudo su $HDFS_USER
/u/HBase B/hadoop-2.2.0/bin/hadoop jar /u/HBase B/hadoop-2.2.0/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.2.1-alpha.jar teragen 100 /test/10gsort/input
/u/HBase B/hadoop-2.2.0/bin/hadoop jar /u/HBase B/hadoop-2.2.0/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.2.1-alpha.jar

Validate the HBase setup. Log in as $HDFS_USER:

/u/HBase B/hadoop-2.2.0/bin/hadoop fs -mkdir -p /apps/HBase
/u/HBase B/hadoop-2.2.0/bin/hadoop fs -chown app:app -R /apps/HBase

Now log in as $HBase_USER:

/u/HBase B/HBase-0.98.3-hadoop2/bin/HBase-daemon.sh --config $HBase_CONF_DIR start master

This command will start the master node. Now let's move to the HBase region server nodes:

/u/HBase B/HBase-0.98.3-hadoop2/bin/HBase-daemon.sh --config $HBase_CONF_DIR start regionserver

This command will start the region servers. For a single machine, a direct sudo ./HBase master start can also be used.
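Once the master and region servers are up, a quick smoke test from the HBase shell (covered in the next step) confirms that writes actually reach the region servers. This is a sketch, not part of the original recipe: it assumes the hbase binary is on the PATH, and the table and column-family names are placeholders.

hbase shell <<'EOF'
status 'simple'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:greeting', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF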
Please check the logs for errors at this location: /opt/HBase B/HBase-0.98.5-hadoop2/logs.

Now let's log in using:

sudo su - $HBase_USER
/u/HBase B/HBase-0.98.3-hadoop2/bin/HBase shell

We will connect HBase to the master.

Validate the ZooKeeper setup. If you want to use an external ZooKeeper, make sure there is no internal HBase-managed ZooKeeper running while working with the external (or an existing) ZooKeeper that is not managed by HBase. For this you have to edit /opt/HBase B/HBase-0.98.5-hadoop2/conf/HBase-env.sh and change the statement to HBase_MANAGES_ZK=false:

# Tell HBase whether it should manage its own instance of Zookeeper or not.
export HBase_MANAGES_ZK=false

Once this is done we can add zoo.cfg to HBase's CLASSPATH. HBase looks into zoo.cfg as a default lookup for configurations:

dataDir=/opt/HBase B/zookeeper-3.4.6/zooData   # this is the place where the zooData will be present
server.1=172.28.182.45:2888:3888               # IP and port for server 01
server.2=172.29.75.37:4888:5888                # IP and port for server 02

You can edit the log4j.properties file, which is located at /opt/HBase B/zookeeper-3.4.6/conf, and point it to the location where you want to keep the logs:

# Define some default values that can be overridden by system properties:
zookeeper.root.logger=INFO, CONSOLE
zookeeper.console.threshold=INFO
zookeeper.log.dir=.
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=DEBUG
zookeeper.tracelog.dir=.    # you can specify the location here
zookeeper.tracelog.file=zookeeper_trace.log

Once this is done you start ZooKeeper with the following command:

-bash-4.1$ sudo /u/HBase B/zookeeper-3.4.6/bin/zkServer.sh start
Starting zookeeper ... STARTED

You can also pipe the log to the ZooKeeper logs:

/u/logs//u/HBase B/zookeeper-3.4.6/zoo.out 2>&1

2  : refers to the second file descriptor of the process, that is, stderr
>  : means redirect
&1 : means the target of the redirection should be the same location as the first file descriptor, that is, stdout

How it works

Sizing of the environment is very critical for the success of any project, and it's a very complex task to optimize it to the needs. We dissect it into two parts, a master and a slave setup, and we can divide it into the following roles:

Master-NameNode
Master-Secondary NameNode
Master-Jobtracker
Master-Yarn Resource Manager
Master-HBase Master
Slave-DataNode
Slave-Map Reduce Tasktracker
Slave-Yarn Node Manager
Slave-HBase Region server

NameNode: The architecture of Hadoop provides the capability to set up a fully fault-tolerant, highly available Hadoop/HBase cluster. Doing so requires a master and slave setup. In a fully HA setup, nodes are configured in an active-passive way; one node is always active at any given point of time and the other node remains passive. The active node is the one interacting with the clients and works as a coordinator for the clients. The other, standby node keeps itself synchronized with the active node to keep the state intact and live, so that in case of failover it is ready to take the load without any downtime. We then have to make sure that when the passive node comes up in the event of a failure, it is in perfect sync with the active node that is currently taking the traffic. This is done by JournalNodes (JNs); these JournalNodes use daemon threads to keep the primary and secondary in perfect sync.

Journal Node: By design, JournalNodes allow only a single NameNode to act as the active/primary writer at a time.
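One detail the excerpt does not show: each quorum member listed as server.N in zoo.cfg also needs a myid file containing that N inside its dataDir. This is standard ZooKeeper practice rather than part of the original recipe; the commands below reuse the dataDir and zkServer.sh paths printed above and are a sketch only.

# On server 1 (172.28.182.45):
echo 1 > "/opt/HBase B/zookeeper-3.4.6/zooData/myid"
# On server 2 (172.29.75.37):
echo 2 > "/opt/HBase B/zookeeper-3.4.6/zooData/myid"

# After starting both nodes, confirm each one's role (leader or follower):
"/u/HBase B/zookeeper-3.4.6/bin/zkServer.sh" status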
In case of failure of the active/primary, the passive NameNode immediately takes charge and transforms itself into the active node; this essentially means the newly active node starts writing to the JournalNodes. This totally prevents the other NameNode from staying in the active state and also means the newly active node works as the failover node.

JobTracker: This is an integral part of the Hadoop ecosystem. It works as a service that farms MapReduce tasks out to specific nodes in the cluster.

ResourceManager (RM): Its responsibility is limited to scheduling, that is, only mediating the available resources in the system between the different needs of applications, such as registering new nodes and retiring dead nodes; it does this by constantly monitoring the heartbeats based on the internal configuration. Due to this core design practice of explicit separation of responsibilities and clear orchestration of modularity, and with the inbuilt and robust scheduler API, the ResourceManager can scale and support different design needs at one end and, on the other, cater to different programming models.

HBase Master: The master server is the main orchestrator for all the region servers in the HBase cluster. Usually it's placed on the ZooKeeper nodes. In a real cluster configuration, you will have 5 to 6 ZooKeeper nodes.

DataNode: It's the real workhorse and does most of the heavy lifting; it runs the MapReduce jobs and stores the chunks of HDFS data. The core objective of the DataNode is to be available on commodity hardware and to be agnostic to failures. It keeps some of the HDFS data, and multiple copies of the same data are sprinkled around the cluster. This makes the DataNode architecture fully fault tolerant, and it is the reason a DataNode can use JBOD rather than relying on expensive RAID.

MapReduce: Jobs are run on these DataNodes in parallel as subtasks, and the subtasks keep the data consistent across the cluster.

So we learned about the HBase basics and how to configure and set it up. We set up HBase to store data in the Hadoop Distributed File System. We also explored the working structure of RAID and JBOD and the differences between both approaches.

If you found this post useful, be sure to check out the book 'HBase High Performance Cookbook' to learn more about configuring HBase in terms of administering and managing clusters, as well as other concepts in HBase.

Read Next
Understanding the HBase Ecosystem
Configuring HBase
5 Mistakes Developers make when working with HBase


Florence now a Category 3 hurricane in Atlantic

By: The Associated Press
Thursday, September 6, 2018

MIAMI — Hurricane Florence flirted with Category 4 status last night before dropping back down to Category 3; however, forecasters are warning the storm is still likely to cause "life-threatening" surf and rip current conditions in Bermuda later this week.

The National Hurricane Center said the storm's maximum sustained winds Wednesday afternoon are estimated to be 215 kph. Hurricane Florence is centred about 2,080 kilometres east-southeast of Bermuda and is moving northwest at 20 kph.

Forecasters expect Florence to weaken somewhat over the next few days, but they anticipate it will still be a powerful hurricane through early next week. Officials say swells caused by Florence will begin to affect Bermuda on Friday. There are no coastal watches or warnings currently in place.

Tags: Bermuda, Hurricane, Hurricane Florence, Weather
