NANOG 47 / ARIN XXIV Joint Session Draft Transcript - 21 October 2009
Note: This transcript may contain errors due to errors in transcription or in formatting it for posting. Therefore, the material is presented only to assist you, and is not an authoritative representation of discussion at the meeting.
Table of Contents
- IPv6 Emerging Stories of Success
- History of IPv6 at ARIN
- Porting to Dual Stack -- Not That Hard
- IPv6 Implementation Fundamentals for ISPs
- NANOG Election Results and Closing
Dave Meyer: Good morning. Welcome to the Joint NANOG / ARIN part of the week. Great to see you all here. I was in the bar last night. Pretty good turnout considering how many people were in the bar last night.
So this is the joint part of our agenda. I'm really excited to have Mark and this panel up here. This is going to be interesting. So I just wanted to welcome you all, and let's get going here.
I wanted to give John Curran -- is he anywhere in the house? Okay. Well, John Curran is missing his 15 minutes of fame here. So with that, I think I'll turn it over to Mark. And just welcome everyone, and enjoy the program.
Mark Kosters: So this morning's session, we're going to talk about IPv6. And one of the things that we want to do is -- we have a stellar panel here -- is that we want to talk about various sort of opportunities that are going on with IPv6. And we want to talk about, hey, look, you know, IPv4 is having this run-out problem.
In two years or so, we're no longer going to have unallocated IP addresses coming from IANA through to the RIRs to you guys. And what does that mean and how is that going to work?
If that's true, why aren't we seeing more IPv6 adoption? Many have supported it. But the actual sort of traffic is very much on the low side. And I know, for example, that Matt has some slides within ARIN, how much IPv6 traffic that we see. But it's in the fraction of 1 percent territory.
So it's interesting to see how all this stuff is working, what people are doing. There's definitely work going on behind the scenes. So it's not as grim as it seems to be.
And the final part is -- what we want to do is talk about, hey, how can we compel the use of IPv6. So the first part of the panel is going to be talking about what's going on within a large ISP and the second part is going to be talking about what happens within an enterprise.
Then we're going to talk about, okay, you want to go make the move to IPv6, what sort of things do you need to do, what sort of things do you have to do with applications. We're going to talk about that. And if you're in an ISP, how can you actually do this, how can you roll out IPv6 in a useful way.
So we have two themes, as I said before. John Brzozowski is going to talk from Comcast, followed by Matt Ryanczak. And then we're going to talk about IPv6 enablement, and Owen DeLong is going to be talking and ending up with Aaron Hughes.
The first three guys, they're going to talk for 15 minutes. And Aaron will have a cookbook that he'll bring to the table and he'll talk for a longer period of time.
I'm not exactly sure how we're going to handle questions and answers, but if you feel, hey, I want to ask this question now, why don't you just come up to the mic; I'm sure we'll go ahead and get you into the queue at that point.
So without much ado, John, why don't you come up here.
John Brzozowski: Good morning, folks. Thank you. Thank you for having me. I will certainly attempt to make this as interesting as possible for you. I really prefer not to hash over a lot of things that you've heard over and over again. I'd like to offer you some new perspectives, perhaps, some new things to consider as far as deploying IPv6 is concerned.
Okay. So I wanted to kind of breeze through. It's certainly a challenge to squash four and a half years', five years' worth of experience into eight slides. So I'll try to touch on highlights here.
So, you know, just as a reminder, the scope of IPv6 for us at Comcast, while it originally started off to leverage IPv6 for device management, we really have ensured our plan to adopt and to leverage IPv6 to extend to -- to enable subscriber services.
So these plans ultimately require us to have a dual stack backbone. Ours happens to be native dual stack, through and through from end to end.
We had to have dual stack capabilities in our back office, which includes support for DHCPv6 and some other kind of key protocols.
A lot of these things are either underway or soon to be complete. I don't know how much detail, but I'm assuming if there are questions on this front, you guys won't be shy and ask them.
And, of course, you know, as far as device management is concerned, when it comes to actually being able to manage devices in a cable network using IPv6, there is a lot of work we had to do, everything ranging from putting the specifications in place at CableLabs a number of years ago, four, five years ago, all the way through to today, making sure implementations are sound and tested and deployable and certified.
This fundamental concept applies not only to modems, but it applies to modems that are embedded with other types of devices that are very popular in a cable network, like an eMTA, which is used for voice, or a set-top box, which I'm sure everybody is familiar with, for video.
Ultimately we leverage all the work that -- all this work to incrementally extend those -- our network and our capabilities to offer subscriber services that are IPv6 enabled.
And, of course, we've -- along the way, we've had our lessons learned, and I'll touch a little bit on this later. I probably won't spend a whole lot of time on it unless people ask for it. But we've also had to employ some well-known transition technologies to make sure that the people who are helping to run and operate and support our deployment of IPv6 are properly enabled to do that for us.
It's not a one-man job. It's certainly a team, organizational-type effort. And we have to make sure that everybody has what they need to support this initiative and this program.
So, again, kind of repeating some of this stuff here. And I'll kind of breeze through this. The key for us initially was device management, and it's still a very important core concept to the IPv6 program of Comcast.
There are three key areas. If any one of them has a gap, I have a problem. Our core network, which is happily in good shape; our access network, which is coming along; and ultimately our back office systems. These are three key building blocks that we've built a huge part of our program around.
And, like I said, we take these three components, we iteratively extend them, and we are going to now use these to talk about how we can enable services over IPv6.
So in our network, native is absolutely preferred wherever possible. However, as I'm sure you all can appreciate, this is not always a possibility. Particularly for us, we found this on the enterprise side. But, like I said, we've leveraged some different techniques to make sure all the right people have the right tools to do their jobs as it relates to IPv6.
We've learned a lot over a four-and-a-half, five-year period of time. And we could probably talk here for hours and perhaps even days. There's just been so much that's been learned and there's still so much more to come.
Ultimately the goal for us is to take all this experience, take all these learnings, take all this great engineering work that we've done and to kind of continue to extend it and do better things with it.
So one of the things that we still struggle with from a challenge point of view is business as usual for IPv6. There's some folks who live, breathe, sleep IPv6. It is a part of their DNA from a work point of view.
However, there are other fronts where this still remains a challenge. I'm sure you all can appreciate that, and this is something that's kind of an ongoing type of situation where those of us who are passionate about what it is that we do, we have to kind of be vigilant and make sure that we continue to remind people that IPv6 is very important. It is the future for us.
And simply avoiding it, and I've heard that message on occasion, that's just not a good idea. I don't know what else to say about that, but when I hear people talk about we're not going to do IPv6, I kind of like ponder and wonder how that could be. But that's perhaps my point of view or my perception.
One of the other things that we found that has been extremely, extremely important is large-scale and interoperability testing. Testing isn't always the most sexy work to do, unfortunately. But it is ever so important.
And particularly in a deployment like ours, where we have millions of devices that could potentially be operating in IPv6 mode, we have to make sure that we test not only key points of the network, where many of these devices connect en masse, but also that they all interoperate with each other, because when you have that kind of population, the diversity is just so huge.
This is very important. And for those of you who have large networks where you have to support similar types of scenarios, please don't overlook that.
Of course, a lot of what we've done is we've benefited from the opportunities that other technology initiatives have offered to us. DOCSIS 3.0 and some of the higher bandwidth speeds that we've been able to offer our subscribers. A lot of those upgrades are things that benefit IPv6 directly. A lot of the network upgrades that have to happen, particularly at the edge, I essentially get IPv6 as part of that. And I would encourage you to find opportunities that are just like that in your own environments, because they're priceless and you don't want to leave them on the table.
So -- but given the business priorities of some of these other initiatives, we, like every other company, have to manage the importance of IPv6 and the availability and the deployment and enablement with other business objectives. For example, there's often competition between features for channel bonding versus IPv6.
And this is probably no different for me than it is for some of you in your day-to-day, and this is an ongoing struggle. I'm not going to sit here and tell you it's easy, it just works and I just get everything that we need and we're good to go. It doesn't work that way.
All right. And I'm not trying to dissuade you or discourage you; I'm just trying to offer you a little taste of my reality.
But we are persistent in our efforts to make sure that all the right things happen, keeping the best interests of the business at heart all the -- along the way the entire time.
So -- and even with that we find that there are places today where we still have different components or vendors that build different components that come back to us to say, You're the only one who asked for it, or, you know, whatever the song and dance is.
And these are key areas that, moving forward, as we talk about service enablement for IPv6, are very important, particularly in the area of security. Whether it's IDS or IPS, vulnerability assessment, these are key areas that I think folks need to pay attention to, because delays here could potentially slow us all down from an adoption point of view, particularly as it relates to offering each and every one of you guys IPv6.
Something that one bullet doesn't do justice to is the back office. For us, the back office was probably one of the most significant aspects of our deployment. It incorporated several hundred servers, north of 100 applications, some homegrown, some vendor. It was just -- it was a massive, massive effort for us.
We are getting to a point now where our entire back office infrastructure -- we have all the components and it's nearly deployed and ready to support IPv6 across the entire footprint. This was no trivial undertaking. This was substantial.
We had to combat everything from negotiations with vendors all the way through to the guy who wrote the code, left the company five years ago, and the API he used doesn't support IPv6. We've been on the entire spectrum of challenges there, but we did it.
So I went fast through the front end of this because I want to talk a little bit about what data services or subscriber services might look like for our cable subscribers. So as I mentioned in the beginning, our preferred approach is to offer folks native dual stacked IPv6 services.
So you have IPv4 today. Looks and smells the way it does today, still going to look like that tomorrow, we'd just like to add IPv6 to it.
So there are two models where that applies: you have a directly connected device, like your Mac or your Vista, plugged into a cable modem, or you have a home gateway that supports IPv6 and you have different devices in your house connected to it that support IPv6 as well, like your Mac or your Vista.
In both cases everything that I talked about already -- the core, the access, the back office -- all these things have to be in place. Without any of those things, I can't offer that to people.
So here are some hopefully pretty pictures of what a directly connected CPE would look like, for example.
Anybody willing to volunteer whether or not they're connected to the Internet like this at home? I'm guessing everybody in this room probably not. Does this look more like what most people use here? You have a single?
Owen DeLong: Pretty close.
John Brzozowski: You don't have a home gateway?
Owen DeLong: Well, I have a home router, but CPE is my router.
John Brzozowski: But I'm talking like when you go home, do you plug your Mac directly into your modem?
Owen DeLong: No. I plug my Mac directly into my Ethernet that goes to my 7204.
John Brzozowski: Right. So you're here. You're this picture.
Owen DeLong: Okay, yes.
John Brzozowski: Just a straw poll, looking for a show of hands. I'm going to assume most everybody in the room kind of subscribes to a model that looks and smells more like this.
So we'll have a lot to talk about here in a few minutes, and probably not a whole lot of surprises for you. But for the vast majority of retail-type home gateway devices, you may or may not be shocked to know that they don't support IPv6.
But you'll also be happy to know that people like myself and other folks are working very hard with that vendor community to make sure that there are retail-type devices that support IPv6 natively in a dual stack fashion that you'll be able to buy.
This is very important. It doesn't address the problem that there's still millions of them out there that don't support IPv6. We'll have to figure out how to solve that one later, and that's the second point here.
One bullet I left in here is kind of more or less my problem, not yours. But when we do delegated prefixes, there is the challenge of making sure that routability to your prefix is in place all the time. Delegating a prefix to a residential subscriber or commercial subscriber poses different challenges for providers when it comes to making sure the connectivity and routability to that delegated prefix is persistent, and it's accurate and up to date.
We can talk a little bit about this later or offline, if necessary.
So kind of in closing here, because I know that I'm running out of time, there are -- on some fronts IPv6 is still up and coming and evolving. But the fact of the matter is that we have enough to do the things that we do to build services around them.
If you guys went to NANOG in Philly and you saw the cable demo that Comcast put together, that was not stuff that was brought in specifically for that demonstration. They were samplings of things that were real from our network. Perhaps at a smaller scale because we had to squash it into a conference room, but they were real.
Hopefully in the not-too-distant future we'll have something like that again and you'll see more of an evolution of those same exact components that will hit home to you.
Testing interoperability. I cannot emphasize enough how important that is. And I can assure you, bugs will happen. We've had bugs in areas where we were not expecting them, popular operating systems, for example, that had IPv6-specific bugs.
And of course scale makes a difference. The bigger it is, the more complicated and the more attention you'll have to pay to planning gradual enablement.
And one of the fundamental principles that we operate under is for IPv6 we cannot affect existing subscriber services. I can't deploy and enable IPv6 to Mark and disrupt every one of his neighbors and their IPv4 service. I can't do that. It's not an option. Right? I'm assuming similar for you.
And kind of in closing here, I'll talk a little bit about content. Content is a very important piece to the equation here. A number of things, including the availability of it and how do you actually announce the availability of it, meaning the DNS, these are all things that are very important. And we have to make sure that the right people who can help get content available over IPv6 are doing very similar things to what we're all doing as far as making the connectivity available.
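To make that content point concrete: at the DNS level, announcing content over IPv6 is largely a matter of publishing a AAAA record alongside the existing A record. A hypothetical zone fragment (names and addresses are illustrative, not from the talk):

```
; www resolves over both protocols; dual-stack clients can prefer the AAAA
www   IN  A     192.0.2.80
www   IN  AAAA  2001:db8::80
```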
And that's all I have. I think I'm about out of time. So thank you.
Mark Kosters: So thank you very much, John. The next speaker is going to be Matt Ryanczak, who is going to talk about IPv6 into the enterprise and focus in on what's going on at ARIN.
Matt Ryanczak: Hi. I'm Matt Ryanczak and I'm the Operations Manager at ARIN. Before I get started, a little story about our network, I guess, and what we're like.
We aren't a huge network. In fact, about a year or two ago someone asked me if we even had a network and at first I was kind of offended by that question. But, as I thought about it, we're not an ISP, we aren't a huge enterprise, we're not a Google or Microsoft or anything like that. But we probably look a whole lot like a lot of ISP's customers. We're about a 50-person company. We're multihomed. We have some leaf networks that served specific purposes. So in fact our network is kind of relevant, I guess. And when it comes to IPv6 it certainly is. We've been running IPv6 since 2003. Started with the Sprint circuit. It was basically a beta, had a lot of issues. Path MTU discovery, we know it was partially tunneled. It was in fact a T1 line.
So it appeared native to us, but once it got into the Sprint network, I think it was tunneled here and there and we had a lot of issues with that. Certainly routing issues. They weren't well connected at the time to the IPv6 Internet. The IPv6 Internet I think itself wasn't very well connected in general.
In 2004 we got a Worldcom circuit. Similar issues there. It wasn't until 2006 that we really started to see a production-ready IPv6 network at ARIN, and that was when we joined an exchange and we were able to get transit there and peer directly with people.
And last year we started to build what I consider to be the true production network at ARIN that's serving real production services, including DNS and WHOIS and routing registry stuff. It's also built like a real production network to serve high-volume data.
So I'll kind of give you a quick overview of each network and some of the challenges that we had with them, as well as some of the lessons that we learned, towards the end of this presentation. In 2003, as I mentioned, it was a T1 line via Sprint. It was simple and very ad hoc and beta. We used a Linux router with a Sangoma T1 card. We used an OpenBSD firewall because at the time Linux didn't have stateful packet filtering for a firewall. We used Linux-based Web servers and DNS, FTP servers.
It was a completely segregated network. And I think this is important, because common practice today, I believe, is that you dual stack everything. Back then there were a lot of security concerns about people stack smashing IPv6 the way that they did IPv4 back in the '90s and potentially compromising your network or causing DDoS attacks or whatever.
So we completely segregated IPv6. I guess you could say we were kind of afraid of it. Things have changed a lot since then. We've learned a lot.
And as I mentioned earlier, we had a lot of issues with path MTU discovery and packets just dying. Some of that stuff was originating with us and the way our Web servers were configured. Some of it was with hosts that were trying to get to us, routers in between. Lots of communication back and forth with people in various locations around the world that just couldn't get to us.
It was a great learning exercise. So it was definitely worth it. We actually have the Sprint circuit still in service today. We're in the process of decommissioning it. We don't use it for any production data, but we've used it for testing and other things for a long time.
One other thing to note about this is Sprint support was always really good. I have to commend them for trying to help us get something that barely anybody knew how to use to work. They were really, really good about that.
In 2004, to kind of complement the Sprint circuit, we got a Worldcom circuit. This circuit was I believe part of Vint Cerf's kind of test IPv6 network that they had at Worldcom at the time.
We used a real router this time. We stuck with OpenBSD firewalls because they just work really well, frankly. Their IPv6 support is excellent. PF is a great way to provide firewalling. Traffic is low enough that we didn't have any sort of issues with interrupts or load on commodity hardware. So it wasn't a problem for us.
And basically what we did was we duplicated the services we had on the Sprint line. This gave us a little bit of redundancy and gave us a different path on the IPv6 Internet that we could use to compare and contrast, basically. If we had problems on Sprint, could we replicate them on Worldcom or -- and I think we were probably really good for both of these companies to test their networks. We were good beta testers at the time.
We had a lot of issues similar to the Sprint link. A lot of path MTU discovery issues related to tunneling. Having issues routing to certain places, especially Europe. There were places in Europe that just couldn't get to us at all. And that was a problem for serving production data, particularly DNS.
We were at the time one of the /8 servers, had an IPv6 address on this network, and our website also had an IPv6 address on this network. And when people can't get to your website, that's bad.
Luckily we don't necessarily make a whole lot of money from our website, so it was worth kind of going through the process of troubleshooting these issues with people. And generally we could get it to work, one way or another, or people would work with us and they would just start going to us over IPv4 or whatever. So it really was, again, a really good exercise.
In 2006 we joined Equi6IX. I believe Louie Lee, of NANOG and ARIN fame, recommended this to us. It was in beta at the time. It was completely free. And we were already in Equinix, so this was kind of a no-brainer.
We got a 100 meg Ethernet connection. OCCAID offered this transit, so we got our transit through them and began peering with people. And I think immediately HDNet and several other vendors came forward and we peered with them and things started to really kind of look like a production network.
We still had a firewall in place. We still offered the same services, but the service level was much, much better than it had ever been before. All of our issues with path MTU discovery and all of the routing issues just went away.
And it was still a segregated network. That's important. I think that kind of shows we're still afraid of IPv6 a little bit. And we eventually did introduce some dual-stack stuff, mainly out of convenience. And you started to see recommendations and best practices come around that said, you know, dual stacking isn't so bad. If you set up your host properly, it's going to be okay. And that's generally what we found to be true.
As we got into 2008, Mark Kosters joined us about six months before that, and we began a big push to kind of reimagine our production services network, something we called public-facing services. And as we did this, we built two brand-new networks, basically: one on the West Coast, one on the East Coast.
And the concept here was that we would host all of our public-facing services -- DNS, Web, FTP, WHOIS, RWhois, everything out there -- to separate the provisioning side from the public side a little bit.
And the two networks work quite a bit differently. This allowed us to focus on performance where we needed it and security where we needed it and things like that. So we built two networks: one uses NTT for transit, and one uses TiNet.
Got rid of the firewalls. Went to larger Cisco routers that could handle higher packet rates for our DNS traffic. We used Foundry load balancers. At the time that we installed these, they didn't even have IPv6 support in beta, actually, and we used 6to4 to provide IPv6 connectivity.
Shortly after that, probably six months, they came out with beta code and we took the plunge and deployed it in production on one of these networks and it actually worked pretty well.
We had some issues, but Foundry was actually really good, very responsive. They were giving us releases and fixes as we found problems. And now we have a stable network running IPv6 all the way through from end to end, which is just really great.
Have yet to really experience any major issues. And as we found them, Foundry has been excellent with helping us resolve them. We find that basically for the most part these days it's configuration issues on our side. We're still learning how to best do this stuff.
So their support has been great. Today DNS is on there. We actually have some of the ARIN /8s hosted on these networks. We have two different servers.
We are just about to deploy a routing registry pilot with newer routing registry code that has full IPv6 support. WHOIS is out there and more services are coming on line out there all the time. This network is completely dual stacked and both networks are standalone.
So how much IPv6 traffic do we see? These are -- the pie charts are kind of bogus. I did them in Visio. It's worth pointing out that the numbers are real. WHOIS, we see about .12 percent. That's not much, really, but it's something. And DNS we actually see .55 percent, which really surprises me. I wouldn't have thought it would be this high, but it's actually quite high.
And then the really surprising one is our website: www.arin.net actually sees around 8 percent of its traffic for 2009 over IPv6. This boggled my mind actually, and I had somebody else check my numbers. I said this was not possible. It turns out that probably 99 percent of that 8 percent is internal ARIN traffic. Because we're dual stacked internally.
A lot of the hosts are IPv6 enabled, and with the precedence of quad A records in DNS lookups, a lot of the internal users browse the website over IPv6. They don't really know that, but they do.
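That precedence effect can be sketched in a few lines. This toy function (an illustration, not ARIN's code; real resolvers implement the full RFC 3484/6724 address selection rules) just stable-sorts getaddrinfo()-style results so the IPv6 candidates are tried first, which is roughly why dual-stacked internal hosts end up on the AAAA:

```python
import socket

def prefer_ipv6(addrinfos):
    """Stable-sort getaddrinfo()-style tuples so AF_INET6 entries come first.
    A rough model of why dual-stack hosts reach AAAA-bearing names over IPv6."""
    return sorted(addrinfos, key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)

# Hypothetical mixed answer for a dual-stacked name (documentation addresses).
mixed = [
    (socket.AF_INET, socket.SOCK_STREAM, 6, "", ("192.0.2.10", 443)),
    (socket.AF_INET6, socket.SOCK_STREAM, 6, "", ("2001:db8::10", 443, 0, 0)),
]
ordered = prefer_ipv6(mixed)
# The IPv6 candidate is attempted first, so working IPv6 carries the traffic.
```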
So one of these days I'll do stats and see what it really is. I thought that was amusing.
Some of the lessons we learned. Tunnels are not really desirable. I have an AT-Net tunnel at home. It works great. I probably couldn't run a business off of that because people would complain constantly.
It works really well for what I use it for, but when we tried to use it for our production services, endless problems, especially with path MTU discovery stuff. I mean, packets falling on the floor here and there. Not all transit is equal. I think this problem is probably not as bad as it used to be.
People are learning how to do this stuff really, really well, but certainly when we started looking at this we had problems. Routing is not as reliable.
Today we still see routing issues on the IPv6 Internet that you would never see on the IPv4 Internet. I think that's just because people are still learning. The backbones and whatnot are not quite as good as the IPv4 backbones. But I think it's getting there.
Dual stack certainly is not so bad. We haven't had any security incidents that we know of. I guess back in the '90s when the ping of death came out and stuff like that, people weren't expecting that either. But we're hopeful that Linux and Windows and these vendors have good stacks and have vetted that stuff. They've learned lessons from the past. But I guess we'll see. I think really the worst case is probably DOS conditions anyways.
Proxies are good. We use 6tunnel to provide, you know, IPv6 services for IPv4-only daemons that we run. That includes our current WHOIS servers, the RWhois and the older routing registry that will soon be natively IPv6 enabled.
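The proxy trick is simple enough to sketch. This is a toy single-connection relay in the spirit of what he describes, not ARIN's actual setup: in production the front socket would be AF_INET6 and the backend the IPv4-only daemon; here the address families are parameters so the sketch runs anywhere:

```python
import socket
import threading

def start_proxy(listen_family, listen_addr, backend_family, backend_addr):
    """Accept one connection on a front socket and shuttle bytes to/from a
    backend socket. With listen_family=AF_INET6 and backend_family=AF_INET,
    an IPv4-only daemon becomes reachable over IPv6. Returns the bound
    front address (useful when binding port 0)."""
    front = socket.socket(listen_family, socket.SOCK_STREAM)
    front.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    front.bind(listen_addr)
    front.listen(1)

    def pump(src, dst):
        # Copy bytes until the source closes, then half-close the destination.
        while True:
            data = src.recv(4096)
            if not data:
                try:
                    dst.shutdown(socket.SHUT_WR)
                except OSError:
                    pass
                return
            dst.sendall(data)

    def serve():
        client, _ = front.accept()
        backend = socket.socket(backend_family, socket.SOCK_STREAM)
        backend.connect(backend_addr)
        threading.Thread(target=pump, args=(client, backend), daemon=True).start()
        pump(backend, client)

    threading.Thread(target=serve, daemon=True).start()
    return front.getsockname()
```

A real deployment would loop over accept(), handle errors, and use select/epoll rather than a thread per direction; the point is only that the translation layer is protocol-family glue, not application logic.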
Also people fear 4-byte ASNs. I put this in here because our two new networks that we have on the East and West Coast run 4-byte ASNs. We peer with people and we have people that -- well, they're afraid to peer with us because they're not really sure what will happen. They haven't tested their gear. They know they need to upgrade.
The compatibility mode that exists in routers today will probably screw up their provisioning systems because they don't want to see 23456 in their AS path. So we have had some people take the plunge, and it's great, and we're hoping to see more people do that.
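The compatibility mode he means is the AS_TRANS mechanism from RFC 4893: a 2-byte-only BGP speaker sees the reserved placeholder ASN 23456 in the path wherever a 4-byte ASN appears, and 32-bit ASNs are conventionally written in "asdot" notation. A small illustration of both (a sketch of the numbering rules, not router behavior):

```python
AS_TRANS = 23456  # reserved 16-bit placeholder for 4-byte ASNs (RFC 4893)

def asdot(asn):
    """Render a 32-bit ASN in asdot notation (RFC 5396): high16.low16."""
    high, low = asn >> 16, asn & 0xFFFF
    return f"{high}.{low}" if high else str(low)

def as_seen_by_old_speaker(asn):
    """A 2-byte-only peer substitutes AS_TRANS for any ASN above 65535."""
    return AS_TRANS if asn > 0xFFFF else asn

print(asdot(393220))                   # "6.4"
print(as_seen_by_old_speaker(393220))  # 23456 -- what the old peer's path shows
print(as_seen_by_old_speaker(64512))   # 64512 -- 2-byte ASNs pass unchanged
```

This is why an untested provisioning system can be confused: many different 4-byte peers all appear as the same AS 23456.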
Data support is of course better. DHCPv6 is not well supported internally. We don't run it. We don't have Vista deployed heavily. We don't have Windows 7 deployed heavily. And Macs and other boxes, they don't support it, at least not without a whole lot of tweaking, so it's -- what's the point.
I hope that eventually we'll be able to do that because it would make management a whole lot easier from just a LAN management perspective.
Reverse DNS is painful: you can't use wildcards. The way that you can do some things in IPv4 to make reverse setup really easy you just can't do in IPv6. Not to mention that it's a very error-prone operation. You can use dig -x to kind of short-circuit that and give you the proper formatting for what the reverse PTR is supposed to look like, but it's still difficult.
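If you are generating reverse zones from scripts rather than at the command line, Python's standard library performs the same nibble expansion that dig -x does:

```python
import ipaddress

# Build the ip6.arpa name for a PTR record; doing this by hand (32 reversed
# nibbles of the fully expanded address) is the error-prone part.
addr = ipaddress.ip_address("2001:db8::1")
print(addr.reverse_pointer)
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
```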
Windows XP is broken, but useful. It doesn't do DNS over IPv6, but you can still run it on a dual stack network and it can get to IPv6 hosts if you have an IPv4 name server. We do that. It works.
Bugging vendors does work. This frankly surprised me. You hear constantly that vendors are ignoring you. But we've found them over the years to be much more receptive than that, and I think part of that is because we're ARIN and they want to be able to say, hey, we helped support ARIN in their efforts to do IPv6, and so maybe we get a little bit of extra attention. But vendors in general have been really receptive and it's been great.
Today and into the future, we're standardizing on dual stack everywhere. IPv6 is enabled by default. That includes push scripts and all this back office sort of stuff, some of the stuff that John mentioned. It's very difficult to do those kinds of upgrades, and as we deploy new things in operations, we're relying on the fact that IPv6 is a default protocol.
That is actually difficult. It's hard to do that sort of stuff: parsing IPv6 addresses, having to throw away existing scripts and rewrite them to do the right thing, retrofitting old scripts to do the right thing when they fail because, from their point of view, IP addresses have changed and you get SSH warnings and things like that. That is a whole lot of work.
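One pattern that helps with the kind of script retrofits he describes (a hypothetical helper, not ARIN's code): compare addresses as parsed values rather than strings, since a single IPv6 address has many textual spellings:

```python
import ipaddress

def same_host(a, b):
    """Compare two address strings by value, not by spelling.
    '2001:DB8:0:0:0:0:0:1' and '2001:db8::1' are the same host."""
    return ipaddress.ip_address(a) == ipaddress.ip_address(b)

# str() also yields the canonical compressed lowercase form, which is a
# safer key for inventories than whatever spelling a device reports.
print(str(ipaddress.ip_address("2001:DB8:0:0:0:0:0:1")))  # 2001:db8::1
```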
ARIN also -- we have an IPv6 requirement for vendors, whether it be a router or circuit or whatever. If we put out an RFP for something, we're going to ask for it to be IPv6-enabled. So that's a really important point as well, I think.
And that's really it. This is the state of IPv6 at ARIN. Thank you.
Patrick Gilmore: Patrick Gilmore from Akamai. You said 8 percent -- I know 99 percent is internal -- but 8 percent of your Web is IPv6 traffic, which really surprised me. Do you do things like the Google whitelist or do you -- DOS users who are broken that way?
Matt Ryanczak: We don't do the Google whitelist, no.
Patrick Gilmore: So there are people that are being DOSed in for that?
Matt Ryanczak: I'm not sure I know what you mean.
Patrick Gilmore: There are people that will ask for a quad A that have no IPv6 connectivity to --
Matt Ryanczak: Oh. Yeah, I mean, that potentially exists. I've gotten --
Patrick Gilmore: It's not a potential.
Matt Ryanczak: Well, what I'm saying, we don't get a whole lot of complaints about people not being able to get to our website.
Patrick Gilmore: Okay.
Matt Ryanczak: And we don't do the v6. or v6.arin. There's actually a quad A for v6.arin.net that points to our website, but www.arin.net also has a quad A.
We actually get very few complaints. Occasionally it does happen, and we try to work with these people to figure out why, and oftentimes we can get them connected to our website. So...
Patrick Gilmore: Thank you.
Kevin Oberman: Kevin Oberman, ESNet.
Just for the record, sometime two or three weeks ago there was an interval where I was unable to get to you by your quad A, but I will admit I was remiss and did not file a trouble report. I used the IPv4 address manually and got to ARIN fine. It was an odd outage because I could connect to ARIN over IPv6. Part of the Web page would open and then it would just freeze and never load anything else. I don't know what that was. It was weird. I should have reported it. Sorry.
Matt Ryanczak: No, that's okay. We had a really similar issue with the IETF website recently. You could get there, but not really. They are similar to us. They have a quad A for their main website. I think it goes back to the IPv6 Internet is just a little bit broken.
But I actually think that, at least in the case of ARIN where we don't have a lot of profit motive for our website -- you know, we don't make money by people really going to our website that much.
So it becomes a really good troubleshooting tool that people can use to test their IPv6 connectivity and they can report it to us and we'll actually work with them very hard to get them to our website over IPv6.
Mark Kosters: Okay. Thanks, Matt. So now we're going to kind of twist, move aside and say, okay, you've heard some success stories going on IPv6. And we're seeing a modicum of growth. And now what we're going to do is actually move into -- let's see what it takes to actually move enterprises into IPv6 and other installations and as well as applications.
And first up here is Owen DeLong. He's going to talk about what it takes to actually start porting these things from IPv4 to IPv6.
Owen DeLong: Morning. So those of you that were in the tutorial on Sunday, a lot of this is going to look familiar. It's kind of an abbreviated version. Why is this important? You've all seen this slide. The exhaustion number there doesn't match the app in your iPhone today because that slide was delivered a few days ago, and that number is a little out of date right now. But the graph to the right, the red line dwindling down to nothingness is the IANA IPv4 free pool, and that number is still relatively accurate. There's a lot of text in the slides coming up, apologies for that, but there isn't a graphical way to represent this.
Here's about what it takes to port your code. There's some sample code available at that URL. Feel free to go look at the code, download it, use it as you wish. There are examples there in C, Perl, and Python. I'm working on Ruby and Java versions to be put up there relatively soon.
I did some variable name changing early on so that I could go looking for the old variable names and identify the places in my code that probably need to get changed. Later as I got more adept at this I found it was more hassle to rename the variables than to just go looking for the calls, and I had a sort of checklist I developed of calls to look for. And then basically it becomes compile, repair, recompile, test, debug, retest.
The general changes: AF_INET changes to AF_INET6. sockaddr_in changes to sockaddr_in6. And if you need to declare a chunk of storage that will hold either a sockaddr_in or a sockaddr_in6 structure, you can do that with a sockaddr_storage declaration.
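A minimal C sketch of that storage trick, assuming a POSIX system; the helper name here is invented for illustration:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* A sockaddr_storage is large enough (and suitably aligned) to hold
 * either a sockaddr_in or a sockaddr_in6, so one buffer serves both
 * address families. */
int family_of(const char *host, const char *port)
{
    struct addrinfo hints, *res;
    struct sockaddr_storage ss;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* accept v4 or v6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;  /* host is a literal address, no DNS */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    memcpy(&ss, res->ai_addr, res->ai_addrlen);
    freeaddrinfo(res);
    return ss.ss_family;              /* AF_INET or AF_INET6 */
}
```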
Same structure members, similar constants, mostly just the address size changes. No big surprises.
There is one thing that changes in IPv6 that can be an issue for you, if you care, and that is this new concept of address scoping, which allows you to distinguish between whether you want to talk to the link local address of an interface or a site local address or a global address, et cetera.
For the most part, if your IPv4 application has been running, you're not going to care about this. You're just going to want to use the defaults because, well, actually IPv4 didn't even have this concept.
Some possible gotchas that are not covered in these code examples is if you're storing IP addresses in log files or databases, or if you're parsing the IP address with routines that aren't part of the libraries that got upgraded to support IPv6, then you're going to need to update your parsing routines, obviously.
What happens if we're not ready with our applications? Well, your IPv6-only clients are going to have a hard time talking to your IPv4-only servers. No big surprises there.
And I'm only going to go into the Perl example here. If you care to follow along, look at the source code diffs in the handout file that's on the Web. There's a link coming up.
Most of the variables didn't get renamed when I ported to Perl. You need to add the Socket6 module. You still need the Socket module, but Socket6 gives you the additional IPv6 constants and such.
All of the getby calls -- gethostbyname, getservbyname, et cetera -- get replaced. You change the protocol and address families in the socket and bind calls, and then there are minor changes to how you process incoming connections.
The biggest change is the conversion from getby-whatever to getaddrinfo or getnameinfo. The changes are similar to those needed for C because the underlying libraries are the same.
And there is one gotcha particular to Perl and getaddrinfo: if you pass it in6addr_any, it doesn't return in6addr_any; it returns localhost. If you subsequently pass that to your bind call, you're in for a rude awakening: you're only listening on localhost and not on what you expected.
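Python's getaddrinfo follows the same underlying C-library convention, so the behavior is easy to demonstrate there: a null host gives you the loopback address unless you explicitly ask for the passive (wildcard, in6addr_any) address suitable for bind:

```python
import socket

# With a None host, getaddrinfo returns the loopback address unless
# AI_PASSIVE is set, in which case it returns the wildcard address
# (in6addr_any) that a server actually wants to bind to.
passive = socket.getaddrinfo(None, 8080, socket.AF_INET6,
                             socket.SOCK_STREAM, 0, socket.AI_PASSIVE)
active = socket.getaddrinfo(None, 8080, socket.AF_INET6,
                            socket.SOCK_STREAM)

print(passive[0][4][0])  # wildcard -- listens on everything
print(active[0][4][0])   # loopback only -- the rude awakening
```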
So this is a more realistic example of what happens if your application isn't ready. You get some combination of these three things where you've got multilayer NAT and/or Dual-Stack Lite or the top left is just sort of communications are broken in general, signified by sort of that red box.
Here's the actual code changes. Apologies for the eye-chart nature of the stuff. The handout makes this a little more clear. It's a side-by-side diff that's nice and readable and you can blow it up as big as you want on your screen. Getprotobyname and getservbyname get replaced by this sort of Swiss Army knife getaddrinfo call that does it all, and then you need to unpack it just a little differently.
The socket and bind calls PF_INET changes to PF_INET6, and there's not a lot else that really matters there in terms of changes.
The IPv4-only and dual stack versions, as you can see here, minor tweaks just to address the additional address family. And that's about it.
The client: very similar migration tactics and a very similar set of changes. Socket6, getaddrinfo. inet_ntoa changes to inet_ntop, and likewise inet_aton changes to inet_pton.
And the Swiss Army knife getaddrinfo simplifies your name resolution process on the client side by quite a bit. And there is one key difference in IPv4-only; you can sort of create the socket and then iterate across the things returned by gethostbyname until one connects.
In IPv6 you need to move the socket creation and destruction inside of the loop.
So here's the IPv4 loop where the socket's already been created. And then in the IPv6 version of the loop you actually have to create the socket each time because it might be a PF_INET6 socket or PF_INET socket. So you don't know that ahead of time.
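A sketch of that client loop in Python (the function name is invented for illustration); the point is that the socket is created and destroyed inside the loop, since each getaddrinfo result may be a different address family:

```python
import socket

def connect_any(host, port):
    """Try every address returned for host, IPv6 and IPv4 alike.

    The socket must be created inside the loop because each result
    may need a different family (AF_INET6 or AF_INET) -- you don't
    know ahead of time which one will connect.
    """
    last_err = None
    for family, socktype, proto, _name, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)  # created per attempt
        try:
            s.connect(sockaddr)
            return s
        except OSError as err:
            last_err = err
            s.close()                               # destroyed per attempt
    raise last_err or OSError("no addresses for %s" % host)
```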
Better libraries could actually solve some of this. And the complexity, the additional complexity isn't as bad as it looks. It's mostly the result of moving code that was outside the loop to the interior of the loop.
Anybody remember this classic slide from the IPv4 soft landing? Apologies to David for stealing his slide without permission.
This is a handy function replacement reference that you can pull off of the slides on the website.
And there's a structure replacement guide also included. And with that there's my contact information. If anybody has questions or comments, I'll take them online or at the end of the session.
Bill Fenner: How about now? Bill Fenner, Arista Networks. Some time ago the default for a lot of kernels changed to accepting only IPv6 connections when you bind to the v6 wildcard address on a socket. Have you found this to be the case? It seems like what you've done is changed an IPv4-only server into an IPv6-only server.
Owen DeLong: On the systems that I worked on, it worked as a dual stack server. If your kernel binds your PF_INET6 socket to IPv6 only, that is a very unfortunate change to the kernel that will require substantially more change to code to support dual stack.
Bill Fenner: I agree it was very unfortunate. It was couched as a security problem at the time. I won't get into why. But that meant that a lot of people jumped on the idea, oh, let's improve security by turning this off and making it harder for IPv6 apps.
I think there's a socket option called something like IPv6 only that you can make sure is turned off in your app that can prevent this, and the default became true in these kernels.
Owen DeLong: I was using a relatively current Linux kernel and it was working fine for dual stack as published. If you can get with me offline on specifics on which kernels are broken that way, I'll try and find one of them to test with and post updates to the code.
Bill Fenner: Cool.
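The socket option being discussed here is IPV6_V6ONLY. A minimal Python sketch of clearing it explicitly, so a v6 listener accepts v4-mapped connections regardless of the kernel's default:

```python
import socket

# On kernels where IPV6_V6ONLY defaults to on, a PF_INET6 listener only
# accepts IPv6 connections; clearing the option restores single-socket
# dual stack behavior, independent of the system-wide default.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
print(s.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY))
```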
Mark Kosters: Thanks. Now we'll have Aaron Hughes coming up with a step-by-step example how to enable IPv6 on your network if you're still running IPv4.
Aaron Hughes: Good morning. This is a presentation for ISPs and how to implement IPv6 across your backbone. I gave a similar presentation at ARIN six months or so ago. And I've updated it since then. In that room it was interesting, there weren't a lot of comments or discussion. In this room, I'll either get beat up or you'll understand the intent.
So this audience knows this, we've said it a bunch of times already, that at this meeting successful implementation requires supporting policies, allocations/assignments, decision influencers, operators, architects, software writers, business, et cetera, et cetera, et cetera.
Remember your role. Attending and participating is not enough. We must influence our respective companies to make good decisions about the future viability of each of our respective companies. This includes the decision to implement IPv6.
Unlike everything else that you implement in your company, which usually has a strategy, revenue generation, affects P&L, goes through BisDev marketing driven by customer demand, et cetera.
V6 is all about survivability. And it costs money, so it's quite different to justify this for your businesses.
Obtaining IPv6, the objections you typically hear about IPv6 are IPv6 -- obtaining IPv6 space from my RIR is hard, my transit providers don't support it yet, there's no BGP multihoming, lack of general support, IPv6 is really hard to implement, and existing infrastructure doesn't support IPv6.
So certainly if your existing infrastructure doesn't support IPv6, that's a much harder question to deal with and you should start dealing with it right now. Because it's a question of when, not if.
This one I'm not going to go into. Getting an allocation is really easy. Goes something like: Dear RIR, I'm planning to assign IPv6 space to 200 customers in the coming five years. And you get a response that looks something like: Dear LIR, your request is approved. Looks something like that.
Okay. So that was easy.
Okay. So my provider doesn't support IPv6. Stop crying about it. There are so many people that support IPv6 right now at peering exchanges. We don't need to involve our transit providers if they're not providing IPv6 yet. A couple of IPv6 exchange options for free transit are Hurricane Electric and WV Fiber. I imagine there are others out there.
This was updated sometime last month, so you've got like 30 locations for HE and 23 locations for WV. If there are others out there that are providing free IPv6 over exchanges, please let everybody know. This is really valuable to people right now, because we need to create a nice, stable IPv6 Internet while we're adding sessions.
So where do we start? We start with our IX locations. We go to our IX providers and say: Dear IX provider, I have some IPv4 addresses, I'm on these ports; what's my IPv6 address?
This is a simple e-mail. You probably already have an IPv6 address assigned to you at every exchange you're at, and they're going to send it back to you.
Make a list of your peering information. This is really important. Your company information, your peeringdb information, your AS, your AS-SET, your locations, your IPv4, IPv6 addresses, contact information, everything every other ISP or network wants to know about you.
Update the peeringdb. Please, for the love of God, let the rest of the world know that you have an IPv6 address. Update the records in your peeringdb so that others will know that you exist and have IPv6 addresses. And then check the little box that says yes, now we have IPv6. You don't have to do anything more than this.
I mean, even yesterday we saw, generally speaking, all of the CDN providers say, yeah, we're ready to peer IPv6. We're not going to send you any traffic yet, but if we don't turn up peers, we can't ever send you any traffic. So we've got to start here.
So what's next? We've got a direct allocation. We've got IPv6 addresses for each of our IX locations. We've made a list of who we are. We've updated peeringdb.com. And this is the big disclaimer.
This document doesn't give you permission to go change your network. Whatever your company process is, please follow your own change process.
Okay. Configuring IPv6. So the idea being simply to locate existing IPv4 peering interfaces, enable IPv6 if you have to on your particular vendor, configure the IPv6 on the peering interface, and test it.
All right. So some of this is small writing and you may just have to review this deck and look at it in detail. But for Cisco you need to enable IPv6 unicast routing. Show the route to the interface. Look at the existing config for your IPv4. On Cisco you do ipv6 enable on the interface, add the IPv6 address, and then see if you can exchange ICMP. So, great, within about a minute you'll be passing some IPv6 packets back to yourself.
And, by the way, this global IPv6 unicast routing, I've only ever seen it crash one router on one version of code. It's worked successfully on every version I've seen that supports it, so you don't have to fear that hitting enter to enable IPv6 unicast routing at least today.
So zooming in a little bit so you're not looking at the little black-and-white text. On Cisco it's IPv6 unicast-routing. On Juniper it's enabled by default.
The interface config looks like that. All right. So reaching across the interface. Now that we have configured an interface, we can ping ourselves. Let's see if we can exchange some packets with the outside world.
Because IPv6 doesn't have ARP, it's a little bit harder to find hosts because IPv6 uses neighbor discovery. So the really easy way to find another host on the peering exchange is simply look at the peeringdb. Pull up the record for the exchange you're at, find an address and ping, send some packets.
All right. And so, just to follow up with Juniper: find the interface -- instead you're doing a show route. Look at the config for IPv4. And the only real difference from configuring IPv4 is that you're changing the word inet to inet6, and the address is the same.
One thing you'll see consistently throughout this deck is IPv4 and IPv6 are configured almost exactly the same way; you're just changing one word or something minor to 6.
So same thing with Juniper: sends packets and you're done. So this is the idea. You just dual stack each of your IX interfaces between your backbone and your IX. Step one.
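As a sketch of that one-word difference, the two dual-stack interface configs might look something like this; the addresses and interface names are hypothetical documentation values, not ones from the talk:

```
! Cisco (IOS): existing IPv4 stays, IPv6 is added alongside it
ipv6 unicast-routing
interface GigabitEthernet0/1
 ip address 192.0.2.1 255.255.255.0
 ipv6 enable
 ipv6 address 2001:db8:100::1/64

# Juniper (Junos): family inet becomes family inet6
set interfaces ge-0/0/1 unit 0 family inet address 192.0.2.1/24
set interfaces ge-0/0/1 unit 0 family inet6 address 2001:db8:100::1/64
```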
Now we need to go into the backbone. So keeping track of your peering interface addresses is one thing, but keeping track of internal assignments is an entirely different thing. You're obviously not going to put this into a signature.
So if you have the resources to do it right now at the beginning, it's time to write a tool. I have asked an amazing number of networks how they manage IPv4 space assignments, and the answer is generally a spreadsheet, which really scares the crap out of me.
You're going to need to use a database and get out of the spreadsheet mentality if you have it today. If you are in the position of not being able to write tools in an easy way or put these in a database, the easiest way to start, at least, is to use a reverse DNS zone file. So, again, if you can write a tool, do.
This is what it looks like from a zoomed-out view. We'll get into what this is. DNS zones work great to do this to start.
Numbering plans are quite different with IPv6, but we're not going to cover that in this deck. We will start with something simple: in order to get your backbone dual stacked, you've got a /32. Cut the first /48 off the top for yourself -- and please remember to pick the top one and not the bottom one, or you're going to get really tired of typing FFFF:FFFF -- then cut the first /64 off the top of that for your loopbacks, and the second /64 onward starts your link count.
So architecture. This is one of the interesting things or great things about IPv6 is, of course, it's an entirely different stack. So at this moment you actually can decide to fundamentally change the way your architecture is laid out in your network.
It can be entirely different than IPv4 architecture. Generally it's probably a good idea to make it the same as your IPv4 architecture, because this is a new protocol and you want to have the same expected output for the traffic as you do with IPv4.
But right now this is the time where you can actually make this decision if you're going to be talking about architecture changes anyway. Great time to do it.
All right. And here's the disclaimer so I don't get beaten at the microphones for architecture decisions.
Everybody's got an opinion about architecture. This is what I think is generally the most common flavor of architecture implemented in backbones today.
So this is going to be a basic network architecture of loopbacks and connected infrastructure injected into OSPF version 3. IBGP full mesh sourced off loopbacks, next-hop-self, all connected except for loopbacks into iBGP, and eBGP via route-maps and communities.
All right. So this is whatever your process is. Remember to follow it. In this case we're opening a mail client, open a DNS editor of some flavor, whatever application used to access routers, get on your desktop, e-mail your NOC, do whatever you've got to do to follow your configuration policy and go.
So numbering plan, again I said grab that first /64 and use it for loopbacks.
A really easy way to get your loopback addresses to start off is to just take that first -- I'm sorry, that fourth octet of the IPv4 address that's on your loopback already and assign it to your IPv6 address in that first /64.
So it's a little funky, as you can see looking at it: .247 will be ::247, which shows up as 7.4.2.0-and-so-on in the reverse zone file. But it's an easy way to configure it so you don't have to look back at this file. You'll just be able to look at your existing loopback configuration and know, based on the IPv4 address, what the IPv6 address is.
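The carve-up and the octet trick can be sketched with Python's ipaddress module; 2001:db8::/32 is a documentation prefix standing in for your own allocation, and the helper name is made up for illustration:

```python
import ipaddress

# Carve the plan described above out of a /32.
alloc = ipaddress.ip_network("2001:db8::/32")
infra48 = next(alloc.subnets(new_prefix=48))      # first /48: infrastructure
loopbacks = next(infra48.subnets(new_prefix=64))  # first /64 of that: loopbacks

def v6_loopback(v4_loopback):
    """Reuse the last IPv4 octet as the visible digits of the v6 loopback.

    The trick is visual, not numeric: ::247 is hex 0x247, but it reads
    the same as the .247 on the existing IPv4 loopback.
    """
    last_octet = v4_loopback.rsplit(".", 1)[1]
    return ipaddress.ip_address("%s%s" % (loopbacks.network_address, last_octet))

print(loopbacks)                  # the loopback /64
print(v6_loopback("10.0.0.247"))  # .247 maps to ::247
```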
So we'll start on one router. Pick one router connected to an IX, because you never want to start building IPv6 on a router that isn't connected to something with IPv6 connectivity.
Some versions of Cisco IOS require that you type in ipv6 router ospf and some process ID to enable it. Most of them don't -- assume it's done automatically as soon as you configure it on the interface. But it looks something like this.
You type in IPv6 OSPF enable, IPv6 address of the loopback on each of the loopbacks, extend from the loopbacks to each of your connected interfaces between your routers on your backbone. And you're going to rinse repeat.
What it looks like from a higher level is something like this. We've got the IPv6 address configured on the IX interface. We start on a connected router on loopback 1. Connect to another router, configure the loopback, configure the connected interface with a /64. We've already done that. I'm going to turn up OSPF and see that the session is working.
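Put together, a minimal Cisco-style sketch of those steps -- a loopback plus one backbone link, both in OSPFv3 area 0 -- might look like this; prefixes are illustrative documentation space, not ones from the talk:

```
! Hypothetical sketch: loopback and a backbone link in OSPFv3
interface Loopback1
 ipv6 address 2001:db8::1/128
 ipv6 ospf 1 area 0
!
interface GigabitEthernet0/2
 ipv6 address 2001:db8:0:1::1/64
 ipv6 ospf 1 area 0
```

Repeat the interface stanza for each connected backbone link, and the loopback /128s and link /64s show up in OSPFv3 as you go.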
From a slightly higher level it looks something like this. You want to make sure that you want to work from one location and work your way to where you're connected to IPv6 and extend OSPF.
Managing assignments with a DNS zone file. Already said this. For the first /64 for loopbacks, the second /64 is your first connected interface, and you just increment.
Drilling down a little closer it looks something like this. I know the colors are a little tough here, but effectively you're just taking one number and incrementing it.
And in this case I'm saying kind of like you treat /30s you're going to treat /64s as .1 and .2. If you want to pick something else, it's up to you. But this is nice and simple and I think fairly common in implementation today.
Do remember that if you're using incrementing systems of any flavor, what comes after 9 is no longer 10. It's A.
All right. So time passes. You're going to walk through your network. This will probably take you maybe three or four hours for a reasonable-sized network. It's actually not a lot of work; you just have to follow each of these interfaces and configure it, turning on OSPF. You'll see all your /64s connected in OSPF and your /128s for your loopbacks appear. And that will be that.
It's not that difficult, but it's just tedious. Once that's done, OSPF will be enabled.
So at this point we've got IPv6 configured on all the exchange interfaces, loopback interfaces, all the connected interfaces, OSPFv3 on loopbacks, OSPFv3 on connected interfaces between routers. But I still can't reach anything in the outside world.
So what's next? We need to configure some peer groups. This makes life really easy. Take your existing IPv4 peer group and build an IPv6 version of it. Basically you're going to copy the config, paste it back in, and add a -v6. You'll have to update your prefix lists and route-maps that match specific IPv4 resources.
You're going to need a peer group for your core router. So this is for your iBGP. Follow exactly what your IPv4 looks like, make a copy of it, add a -v6 on the end of it.
So the high level looks like this. OSPF is already on all these connected links. You've already configured them. IBGP will look like this. So it's loopback to loopback, next-hop-self, full iBGP mesh.
You'll have some common config that's on each of your routers, so you'll probably want to put it in one common location and build a common config that's to be pushed out.
IBGP is going to handle the connected interfaces, except for loopbacks. So you'll need a route-map for this. You've probably already got something that looks like this, a redist-connected that denies interface loopback0.
And in IPv6 the syntax is a little different, particularly on Cisco -- I don't know why they did this, but it's match ipv6 address, and then set community. Whatever community you're using for loopbacks, keep it the same for IPv6. There's no reason to have a separate set of communities for IPv4 versus IPv6. In fact, that makes things much more complicated down the line.
So the basic BGP config looks something like this. Router BGP, your ASN. The only thing you have to type differently -- and this one's kind of a scary one. When you type in address-family IPv6, it reformats your entire config for BGP. Changes it from being one group of config to splitting it out to a small, common group of config, and then there's an address-family IPv4 and address-family IPv6.
The first time you see Rancid run after you type that, it will look scary, like all your IPv4 configuration changed as well. But don't worry, it's fine; it just moved a little.
You're going to create a peer group that looks something like this: Remote-AS, your AS, soft-reconfig in, update-source loopback0, send-community, next-hop-self, redist-connected route-map, redist-connected IPv6.
This looks exactly like your IPv4 does. Make a list of all your router loopbacks out of whatever your /32 is. And note that because we used the first /64 of the first /48 in our /32, it's a simple list, right -- oh, and that's supposed to be colon colon, by the way, not a single colon.
Convert the internal neighbor statements to peer group statements. So now you've got your list of all your internal routers: peer group-CORE-v6.
Stuff them into a big common config file that's got your route-map in it, your router BGP, your internal peer group and all your neighbor statements.
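A hedged sketch of what that common file might contain on a Cisco box, using a documentation ASN (64496) and documentation loopback addresses; your actual route-map and community handling will follow your IPv4 config:

```
! Hypothetical common iBGP config: route-map, BGP, peer group, neighbors
route-map redist-connected-v6 deny 10
 match interface Loopback0
route-map redist-connected-v6 permit 20
!
router bgp 64496
 address-family ipv6
  neighbor CORE-v6 peer-group
  neighbor CORE-v6 remote-as 64496
  neighbor CORE-v6 update-source Loopback0
  neighbor CORE-v6 next-hop-self
  neighbor CORE-v6 send-community
  redistribute connected route-map redist-connected-v6
  neighbor 2001:db8::2 peer-group CORE-v6
  neighbor 2001:db8::3 peer-group CORE-v6
```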
So this is up to you. But I would say, because it's a new technology and because you're doing this for the first time, I would stop right now and go ahead and push this config to all your routers.
You can certainly wait until you've created all of your route-maps and all of your policies, but for the sake of simplicity let's just push this now.
I don't know what you use today: SSH, Telnet, copy-paste by hand, Rancid, some kind of config push tool. But go ahead and push this out, and be sure to remove the neighbor that's to yourself for each of the routers that you're doing this.
So, poof, all the BGP sessions magically come up internally. That screen shot is a little off because of course you won't see 1500 internal prefixes; you'll see something more like the number of routers plus a few.
So iBGP is pretty straightforward. Again, this is just copy your IPv4 config, add a -v6 on it, and change anything that is a match specific to IPv4.
All right. So configuring peers. Again, we've done all this work and we still can't actually reach the outside world. We can see our iBGP just fine. We're going to need another peer group, so we make a peer group called PEERS-v6.
This is the first time we really have to change sanity prefix lists and route-maps, because internally, generally speaking, you don't filter your iBGP sessions.
We need a basic sanity list. Now, today there are some well-known IPv6 bogon lists out there that are pretty nice, though they're not as easy to find as the IPv4 ones.
Basic sanity is something like these two lines. You're probably going to do something a little more anal than this. But this is at a minimum what you need to do. It just sets the bit boundaries of what you're allowed to send out and receive, so you protect both the people you're sending prefixes to and yourself.
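One possible reading of those two lines, hedged as an assumption since the slide isn't reproduced here: permit only global unicast space at sane prefix lengths and deny everything else. The exact bit boundaries are a matter of local policy:

```
! Hypothetical minimal sanity filter: global unicast only, nothing
! longer than a /48 in or out
ipv6 prefix-list sanity-v6 seq 5 permit 2000::/3 le 48
ipv6 prefix-list sanity-v6 seq 10 deny ::/0 le 128
```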
Please, please, please, if you're not doing this for IPv4 already, do it for IPv6: deny sending your connected peering exchange prefixes out to the rest of the world. Create a simple IPv6 prefix-list, PEERINGPOINTS.
You can find the list on peeringdb, or you can use your connected interfaces and chop them to the /64 and just make the list.
Then you need to create a list of your prefixes. Hopefully this is just one line: an IPv6 prefix-list, MY-COMPANY, permitting the /32. And create a route-map to apply outbound.
This should look almost exactly like your IPv4 except you're changing match IP to match IPv6 address prefix-list communities, exact same configuration for matching.
And then you're also going to need a route-map to apply inbound. Again, we're matching IP address prefix -- sorry, that should be IPv6 address prefix sanity-v6.
Set your local pref to whatever you set it to for IPv4, set your community to whatever you set it to for IPv4. Again, no reason to change this. And turn up a peer.
This is the cool, different thing. Unlike IPv4, with IPv6 you can send out an e-mail that looks a lot more like this: I've completed the dual stack on my backbone. I'm ready to turn up IPv6 peering. I'd greatly appreciate turning up sessions at our common locations and would also appreciate a full IPv6 BGP table.
This is different. Remember to attach your peering info that you created in the beginning.
Okay. So typically what you're going to receive from an HE or WV is, hey, the sessions have been configured and it's up.
I'm pretty sure that Hurricane Electric is promoting IPv6 and quite responsive to IPv6 peering requests.
So we need to configure our side of the sessions. Add router BGP, address-family IPv6. The neighbor statement looks just like IPv4 except it's got a IPv6 address in there. Remote-address, peer-group now PEERS-v6. A description if you like, and you're going to see in your log your neighbors come up.
This is where we need to make sure things look good. The syntax has changed quite a bit in IOS: now you have to specify both the address family and the type, unicast or multicast.
So now you're going to be typing show bgp ipv6 unicast summary to look at this peer. And in this case -- I think I wrote this deck three weeks ago, so there were 1793 prefixes we could see there. I think today it's something more like 2100 or 2200.
So, great, we see prefixes. Check to see what we're advertising to make sure we're advertising just our /32 and not something else. We can see in this slide we're showing that we're advertising 12 prefixes in one. That's great. Now we can finally reach the outside world. Do a trace route. Do some pings. Take a look at what it feels like from your router to get to the outside world.
Note that if you're on a router connected to an exchange interface and you trace route and you're sourcing off of the connected IX, it's not going to go anywhere. So hop one router inside such that it's sourcing off your /32 rather than that connected interface.
All right. Set up the other sessions and look at the table. You're going to see the table growth just like you would see in IPv4. Known for multiple next-hops, all the metrics should look good. The table is growing and looks reasonable. This is going to -- I don't know what I was going to say.
All right. So IPv6 peering. Just as easily as IPv4 peering, just the config is changing IPv4 to IPv6.
All right. So attaching a host to the network. Now we've got a functioning IPv6 network, so let's get some hosts online to play with. This needs to be, hopefully, nonproduction: a small segment of the office, a lab, a development environment, or your desktop. But do try and use something that is nonproduction for your first IPv6 test segment. And keep in mind that nobody's monitoring your IPv6 network yet. This is brand new and you need to make sure that it's certainly isolated.
So all you really need to do is find an interface that's already got an IPv4 address. Hopefully your lab. In this case we're using ns0 for a dev box.
Find the existing IPv4 route. Look at the existing IPv4 config and grab the next /64 from your DNS file that we created.
In this case I've arbitrarily added 1,000 -- which is really not a thousand since it's in hex -- to the IPv6 counter that I'm using. You really should have an implementation plan, but for your development, for your testing, the first time, this can entirely be arbitrary. And we can worry about regional aggregation and stuff like that later.
So add the IPv6 to the interface: ipv6 enable on the interface, then ipv6 address, whatever the prefix is, ::1 or ::2, /64.
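In IOS-style syntax, the lines he's describing look something like this (the interface name and prefix are illustrative, using a documentation prefix):

```
interface GigabitEthernet0/1
 description dev lab segment
 ipv6 enable
 ipv6 address 2001:db8:1000::1/64
```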
And then go back to your host, depending on what operating system you're on. Take a look at the ifconfig output and you're going to see an IPv6 address there. Autoconf is enabled by default on most operating systems.
So this is all you really have to do to get your first host up. So, poof, you've got an IPv6 address.
Do a little testing. Do a ping6. Do some traceroute6 from the host. And you'll see that you've got a host connected to the IPv6 Internet.
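From the host side, that testing is just a couple of commands (the addresses here are examples using the documentation prefix):

```
$ ping6 2001:db8:1000::1        # the router's address on the lab /64
$ traceroute6 ipv6.google.com   # out through your newly dual-stacked backbone
```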
So from a router's perspective, this is a little different. Again, there's no longer ARP, so you have to do things like -- you can certainly ping the same way, just ping and paste in the address.
There are a couple of versions of code out there where you have to type ping ipv6; I'm not entirely sure why. But on most of them you just type ping, paste in the address, and you're good.
Looking at it from BGP's perspective: show bgp ipv6 unicast and then the prefix. You'll see that it's now being injected via iBGP as a connected route. And this will be the first prefix in the table that is not known via OSPF.
So if you've got a browser on that host, pull up ripe.net or some flavor of an IPv6-enabled site, like ipv6.google.com. If you look in the top right corner, you'll be able to see that your address has changed. It's a great site for this. I think ARIN should do the same; it'd be nice to see whether you're coming in over IPv4 or IPv6.
So a quick note about SLAAC. As soon as you add the /64 to the interface, every host on that subnet that's got autoconf enabled -- which it is by default -- is going to get an IPv6 address.
So this is really important to understand: if you're not addressing the security policy right at that moment, you need to make sure that it's a small environment, because every other host is going to get an IPv6 address as well.
In other words, don't mix your lab or your dev boxes with production boxes on the same subnet.
All right. So let's add some name service. Time to add DNS. Reverse first. The PTR record looks a little funny.
A trick for getting this address: if you just type host or nslookup and paste in the forward IPv6 address right out of your ifconfig output, then while the name server will return host not found, it returns the name in exactly the right format to be pasted back into the zone file. So it's a neat way to grab the reversed address; otherwise it's kind of painful.
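For example -- documentation addresses here, and the exact wording of the output varies by tool version -- the error message itself carries the nibble-reversed name:

```
$ host 2001:db8:1000::21
Host 1.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.8.b.d.0.1.0.0.2.ip6.arpa not found: 3(NXDOMAIN)
```

Everything between "Host" and "not found" is the ip6.arpa owner name, ready to paste into the reverse zone.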
There are a few tools out there. I know ISC, the folks behind BIND, are working on a few toolsets.
For the forward record, you've got an A record already; you simply add a AAAA and paste in the IPv6 address, then rndc reload and test.
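As a zone-file sketch -- the names and addresses are illustrative, not from the slides -- the AAAA is the only new line:

```
; db.example.com -- the A record was already there; the AAAA is the addition
ns0     IN  A     192.0.2.21
ns0     IN  AAAA  2001:db8:1000::21
```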
So that's our first host on IPv6. Now if we go to the router and ping ns0, you can see that it resolves to the IPv6 address first and we're actually passing some packets to a host.
So, the security note. This machine is now on the globally accessible IPv6 Internet with no filters in place, listening on every port where a daemon is running on the box. And everything connected to that VLAN with autoconf enabled has an IPv6 address.
Use show ipv6 neighbors to view the configured hosts on that segment and see if there's anything else there. If you have a security policy implemented for IPv4, you'll need to do it for IPv6 as well: iptables becomes ip6tables, IPFW gains IPv6 rules, et cetera.
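For a Linux host the IPv4 rules translate almost one-for-one. A minimal sketch -- the ports and policy are examples, not a recommended ruleset:

```
# mirror of the v4 policy, using ip6tables instead of iptables
ip6tables -P INPUT DROP
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT                          # don't break ND or PMTUD
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT                     # ssh, as an example
```

Note the ICMPv6 rule: unlike v4, blocking ICMP wholesale breaks neighbor discovery.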
I'll speed this up a little bit, but you can review this deck a little later. What's next? We need to add more peering. This is the time when you send e-mails out to everybody. Anybody that's got an IPv6 address is very likely to peer with you at this point. I've found only a couple of companies out there that have moved their IPv6 peering policy to the same requirements as their IPv4 peering policy.
But, generally speaking, it's open peering; you'll turn up a bunch of sessions. Policies are much more flexible. People will find you. And today, at least, every bit you move over IPv6 is a bit that you save off of IPv4, and it's free.
So, IPv6 for your customers. Just like your dev lab, you want to find a customer that's known and friendly -- find someone who wants IPv6. I'm sure you've got a request in the queue somewhere. Let's config it, or build a config for it.
Route-maps -- again, this is the exact same thing. You just look for things that match ip and change them, basically, to match ipv6. This is the IPv4 version; the IPv6 version is on the next slide.
It's pretty easy to go through your routers and find these dependencies on match IP.
We need to build a route-map for customer-in and customer-out. Again, we're changing match ip address to match ipv6 address; the prefix list is exactly the same.
On the neighbor statements, we're building the IPv6 equivalent of the customer-full IPv4 config. And notice again I'm just taking my exact same config -- route-map customer-in, route-map customer-out -- and changing it to route-map customer-in-ipv6, et cetera.
The configuration statement on BGP looks exactly the same; it's the neighbor address. We're just typing address-family ipv6 instead of address-family ipv4, and the peer name just gets ipv6 appended.
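Sketched in IOS-style syntax -- the names, addresses, and AS numbers are invented for illustration -- the per-customer pieces fit together like this:

```
ipv6 prefix-list customer-1-v6 seq 5 permit 2001:db8:4000::/48
!
route-map customer-in-ipv6 permit 10
 match ipv6 address prefix-list customer-1-v6
!
route-map customer-out-ipv6 permit 10
!
router bgp 64500
 neighbor 2001:db8:2000::2 remote-as 64511
 address-family ipv6
  neighbor 2001:db8:2000::2 activate
  neighbor 2001:db8:2000::2 route-map customer-in-ipv6 in
  neighbor 2001:db8:2000::2 route-map customer-out-ipv6 out
```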
Once you do this, your customer will come up. I did show an example here where the customer is multihomed. We've assigned them a /48, and they're announcing it to another provider. This is quite common in the table. And your customer will ask questions about this if they're IPv6 savvy, because they will have read ARIN policy and say: well, do I need a /48 from each of my providers?
Most people are allowing dual stacking and ignoring the current policy in place with ARIN except for a few providers. So you can do this with customers.
All right. Again, check for sanity, check to see what you're advertising, check to see what you're receiving.
Other ways to dual stack customers. For connected customers, you just need to add a /64. For static customers, you use a /64 for the connected interface and then route a /64 or /48 statically to the other side of the interface. And you'll need a route-map if you redistribute statics.
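A sketch of the static case in IOS-style syntax -- all values here are invented: a /64 on the link, and the customer's /48 routed to their side of it.

```
interface GigabitEthernet0/2
 description static customer link
 ipv6 address 2001:db8:3000::1/64
!
! the customer's /48, next hop = their address on the link /64
ipv6 route 2001:db8:4000::/48 2001:db8:3000::2
```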
The point of this deck is that step one is easy. And you'll find this from the majority of people when they say there's no IPv6 content out there. The reason is because people are in the process of step one, which is getting their backbone dual stacked and connected to the IPv6 Internet and getting more peering.
But the slides I just went through will get you through step one. Step two is the hard part. That's addressing your security policy. That's getting your staff trained. That's dual stacking your NOCs. You know, all of the software updates that need to be made, the updates to tools. Working with departments on education.
This stuff is going to be the painful process of educating the rest of your company and your customers and your processes. But that first step really isn't very hard.
So the conclusion is: obtaining IPv6 is easy. IPv6 BGP multihoming works. IPv6 is generally well supported on routers. Transit providers are not needed to implement IPv6. There are lots and lots of IPv6 peers out there to connect with. And dual stacking the backbone will not impact your edges until you are ready for it to.
Implementation is not very hard. And the config is mostly intuitive and the same as IPv4. Questions?
Leo Bicknell: Leo Bicknell, ISC, ARIN-AC.
Thanks for a great presentation. I think that was an excellent summary. And I have just a couple of small items I wanted to bring up both to help you improve the presentation in the future and to let folks know a couple of things.
You mentioned the BGP config change on Cisco IOS, the way it reformats things. Some folks who do multicast -- which I realize is probably two people in the room -- have already experienced this. However, to reduce the fear factor, there is a command, if you go under router bgp, called bgp upgrade-cli, that will take your existing config and put it in the new format with no other changes.
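For reference, the command Leo mentions is entered under the BGP stanza, something like this (the AS number is an example):

```
router bgp 64500
 bgp upgrade-cli
```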
So you don't have to go paste in some new multicast or IPv6 thing to see the change; you can get an exact one-for-one and check it if you're really paranoid.
The other thing I wanted to mention is the presentation was very heavy on Cisco IOS classic sort of commands.
One of the wonderful features of IOS XR and JUNOS and I believe some other platforms that I don't have experience with is you can actually do common IPv4 and IPv6 policy. You can put both matches in the same route-map policy or whatever and not have to duplicate everything.
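A rough sketch of what Leo is describing, in JUNOS-style syntax -- the policy and prefix-list names are made up, so check your platform's documentation: one policy, no per-family duplication, applied as the import policy on both v4 and v6 sessions.

```
policy-options {
    policy-statement customer-in {
        term customer-routes {
            from {
                /* a single prefix-list may hold both inet and inet6 prefixes */
                prefix-list customer-1;
            }
            then accept;
        }
        term everything-else {
            then reject;
        }
    }
}
```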
And folks really should pay attention to that as they're selecting new equipment and thinking about their management overhead going forward.
And lastly I wanted to offer up: I work with the company that produces BIND, so we're a little familiar with how it works, and we find with IPv6 reverse DNS in particular, a lot of people forget that the $ORIGIN directive exists.
This is a directive that sets you somewhere further down in the DNS tree. And in particular, you mentioned configuring a server and you showed an autoconf example.
We find with most of our servers for a variety of reasons it's easier to configure static IPv6 addresses just like we do in IPv4 than to use autoconf. If you swap out a NIC, your reverse DNS doesn't change and all sorts of other good things.
And we also find for humans it's much easier to make the hex digits of the IPv6 address match the decimal digits of the IPv4 address, which is a really weird thing. But if your IPv4 address ends in .212, your IPv6 address ends in ::212.
The nice thing about this is the combination of dollar origin and making them the same means you can copy your reverse file, creatively use dollar origin, add a couple of dots and all of your servers magically have the same reverse DNS. So that can be a huge time saver in getting things configured.
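A sketch of the combined trick in BIND zone-file syntax -- the zone, prefix, and host names here are illustrative, using a documentation prefix:

```
; reverse zone for 2001:db8:1000::/64, using $ORIGIN so each record
; only carries the host's 16 nibbles instead of the full 32
$ORIGIN 0.0.0.0.0.0.0.1.8.b.d.0.1.0.0.2.ip6.arpa.
1.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0    IN PTR ns0.example.com.  ; host ::21,  v4 .21
2.1.2.0.0.0.0.0.0.0.0.0.0.0.0.0    IN PTR www.example.com.  ; host ::212, v4 .212
```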
But all in all a very excellent presentation.
Aaron Hughes: Thanks. And I couldn't agree more with you. This is really just about making sure people understand how easy step one is. What you're talking about is getting into the details of step two -- server management, DNS management, allocations and assignments and structure -- which you absolutely should research. And that's really great advice.
But this stops at I want to get a dev host up, not I want to get a production host up.
Leo Bicknell: I also wanted to help improve the presentation a bit. On an interface facing an Internet exchange, you'll want to suppress your router advertisements, which are on by default on all Cisco routers. So you might want to add that to your presentation.
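The knob Leo is referring to -- depending on the IOS train it's spelled ipv6 nd ra suppress or, on older code, ipv6 nd suppress-ra -- is a per-interface setting (interface name illustrative):

```
interface GigabitEthernet0/0
 description exchange-facing interface
 ipv6 nd ra suppress
```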
Aaron Hughes: Will do.
Cathy Aronson: Hi. Cathy Aronson, ARIN Advisory Council. Owen and Aaron, this sounds like the beginning of a wonderful best common practices document.
Mark Kosters: I see there's no one further at the mic, which is great because we're actually 10 minutes behind. So I think this concludes the panel. And if you have any further questions, please catch us afterwards. Thank you.
Dave Meyer: I want to thank the panel. So next is Betty with some results for us.
Betty Burke: There's the election results. It was very encouraging to see the total number of individuals go up slightly, but the votes cast were greatly spread among the number of candidates that we had, which were all very, very great this time.
Steve Feldman, Sylvia and Duane are the new SC members, and all of the amendments passed.
Dave Meyer: So here you go. NANOG 48 is in Austin; Data Foundry and Giganews are sponsors, so see you there. But before we close out the NANOG part of this, I wanted to first thank our hosts, ARIN -- I'm sorry -- our hosts Arbor and Merit. Let's give them a round of applause.
The joint meeting with ARIN was fantastic; it was great being part of it. Let's give the ARIN people a round of applause, all the NANOG folks.
All of our other sponsors and speakers, we had a great agenda. It really went well.
And then, finally, all of you for coming and participating. You guys all deserve a round of applause too, so thank you.
One final thing I want to do before we close this all off is that, as Betty mentioned, we have new SC members. But we're also at the first point since the kind of restructuring of NANOG where people have served on the PC or SC and are reaching term limits, and those people are Ren Provo, Todd Underwood, Josh Snowhorn and Joel Jaeggli.
So they have contributed immensely to the program, to the structure, to the viability and to the quality of NANOG, and I think they deserve a round of applause as well.
So thank you all for coming and have a great ARIN and we'll see you all in Austin.
Betty Burke: One last comment. Please take a moment and complete your NANOG survey. It will be open for another week. Thanks.
(Recess taken 10:30 a.m.)