Also known as:
- DevOps with Front End Debugging
- Web Archive Operations Engineer
- Salary: Negotiable base salary plus participation in share options scheme
- Location: Home-based or office-based in Edinburgh, UK, or the East Coast, USA
- To find out more or apply for this job, please email an intro plus your CV to Shuba Rao at email@example.com.
Agencies, please note: applications forwarded through agents will not be accepted unless you have a prior arrangement with us. We will not make such an arrangement if you contact us!
About the Company
Hanzo Archives is a cutting-edge web archiving company. Global corporations use our products and services to capture, archive, preserve, and make discoverable web-based electronically stored information (ESI) in native format. Their needs are primarily driven by eDiscovery, information governance, and heritage requirements. Our customers are some of the largest and most successful corporations in their industries. We currently operate in Europe and the USA.
Hanzo has implemented the entire technology stack required to archive the modern web, with a sophisticated crawler at its core. This job is at the heart of crawler operations: configuring and managing crawls, processing archived data, and interacting with customers. We call this “Crawl Engineering”.
Reporting to the Head of Development, the Senior Crawl Engineer will primarily drive the technical aspects of archiving operations for customers and ensure that we continue to deliver innovative, high-quality services. This will include writing software and tools to support these tasks; running large, distributed, long-running jobs; instrumenting systems and gathering metrics; processing large volumes of data; and managing virtual infrastructure (machines, storage).
Roles and Responsibilities:
- Run crawler operations, including configuring crawls, building probers, and diagnosing and resolving issues
- Work within our process, which includes monitoring SLAs and updating our issue-tracking system
- Translate feedback from customers and operations into software development to enhance our product and service offerings
- Maintain and enhance existing software (both internal products and our open source projects)
- Communicate systematically and in a timely manner
- Work proactively, enthusiastically seeking problems in the software and systems and finding solutions
- Be responsible for completing time-critical day-to-day tasks
- Solve problems independently and as a team
Skills and Abilities Required for the Role:
- Diagnose technical problems effectively
- Work in a startup environment, taking on whatever disparate tasks need to be completed in a timely manner
- Document software rigorously
- Work with and without supervision
- Solve problems and think laterally, both individually and as part of a team
- Communicate, and offer or ask for advice when needed
- Actively seek out problems and find solutions
- Work remotely and with geographically dispersed teams
The following are essential, demonstrable personal attributes for all candidates:
- Willing to firefight
- Write quality code
- Understand and work with other people’s code
- Solve technical and operational problems
- Regular expressions
- Unix / Linux, including scripting with tools like grep, find, and awk
- In-depth understanding of HTTP and the web
- Write clearly
- Responsible and self-motivated
- Eager to learn, teach, and solve problems
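To give a flavour of the everyday Unix scripting this role involves, a routine task might be summarising HTTP status codes across a crawl log with awk. A minimal sketch, assuming an invented log format (this is purely illustrative, not Hanzo's actual log format or tooling):

```shell
#!/bin/sh
# Build a tiny sample crawl log for illustration
# (invented fields: timestamp, HTTP status, URL).
cat > crawl.log <<'EOF'
2013-01-01T00:00:01 200 http://example.com/
2013-01-01T00:00:02 404 http://example.com/missing
2013-01-01T00:00:03 200 http://example.com/about
EOF

# Count responses per HTTP status code.
awk '{ counts[$2]++ } END { for (s in counts) print s, counts[s] }' crawl.log | sort
# Prints:
# 200 2
# 404 1
```

The same one-liner scales to logs far too large to open in an editor, which is why fluency with these tools matters day to day.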
Hanzo Archives is a world leader in commercial web archiving. We are looking for software engineers who have been out of college for a year or two (or who are at least out of college and can prove themselves) to work and learn with us. Ideally you will have experience with Python or Ruby (or similar) and good knowledge of the web: you will know a web server from a router, and your HTML from your RTMP.
Salary: In the region of £25k-£28k per annum.
We asked our team what the best features of the job were and here are some of the things they said:
- constant stream of annoying but interesting problems to fix
- work from couch, never have to visit a data centre
- incredible opportunity: early days in a growing start-up with traction in the marketplace
- forefront of web crawling
- fast-growing company with many challenges, so a high probability of your good ideas being used
- good mix of operational, development, and research work
- international travel opportunities for those who enjoy it
- E-mail contact: firstname.lastname@example.org
- Telecommuting is OK, but you should be based in the UK; our team meets up regularly
- Agencies NOTE: Applications forwarded through agents will not be accepted. Agents who call unsolicited will go onto a black-list. We will ONLY work with agencies we know and trust.