NASA Moderator: Anita Sohus
January 13, 2004 / 1:00 p.m. CST

Abstract:
- "First Look" airs on PBS Saturday, Jan 17, from the Houston Museum of Natural Science; check local listings
- The Houston Museum of Natural Science and Rice University have some educational software you can demo
- Event Scope has some new updates
- The 3-D image was one image only; no full-color panorama has been done yet
- The private site is http://muse.jpl.nasa.gov
- Your best options, in order, for current content are:
  o NASA TV briefings and content
  o http://marsrovers.jpl.nasa.gov, the public site
  o http://muse.jpl.nasa.gov, the museum site, for special info and if traffic is slowing down the public site
  o http://photojournal.jpl.nasa.gov, for the released images with captions

Coordinator: This is the coordinator. I'd like to inform all participants that today's call is being recorded. Thank you; you may begin.

Anita: Hello, there. It's Anita. I see Eric is down the hall having a meeting with his visualization folks, and Michelle is in a staff meeting. So are there any issues or problems that you need us to work on for you at the moment?

Larry: This is Larry Ciupik from Adler. We are interested in the 3-D model that you've been showing in real-time animations and flyovers. Is that possible to get?

Anita: I have to admit, I have not seen that myself yet. It should be available on the marsrovers site or the Photojournal site. Have you been able to find it there?

Cathy: This is Cathy Tyson at National Geographic, and I have a somewhat related question. We're looking for a color image of the rover, a diagram with the parts labeled, and there's not one in any size that you can reprint in the museum; I've looked all through JPL and everywhere else. Is there one that's big enough to download?

Anita: Yes, Cathy, I saw your e-mail on that. Sorry.
Not everything has been transferred over from the old Mars Viz site to the new muse site. We do have very-high-resolution stills from Dan Maas, and I will get those posted there; then you can probably put the labels on yourself. There is also a PowerPoint presentation that should have been on the muse site a long time ago, and I think it's going to get there today, and that PowerPoint is exactly what you want.

Cathy: Yes, I was looking for that, but I couldn't log on to the site in any way. How do you get into it? It wouldn't let me--

Barbara: Cathy, this is Barbara. I checked with Lance, and it's down; that's why you weren't able to get in. I'm able to get in as administrator, because I'm going into a different part. But I did check with him, and it is down, so I asked him to bring it back up again.

Cathy: Thank you.

Anita: I want to be sure that people are going to the current site, which is muse.jpl.nasa.gov.

Shawn: This is Shawn Laawsch. That's been down for a while, because I've tried to get in the last couple of days and have had no luck whatsoever. It keeps giving me "page unavailable."

M: I went in this morning, and it worked for me.

Barbara: Did you guys leave any messages on the anomalies phone line? I've set up two lines, one of which is: if you have a problem, please call.

Shawn: I haven't put in a message there yet.

Barbara: If you folks do that, we check it every day, and then we'll be able to help you more efficiently.

W: Yes, don't sit there and struggle. Either e-mail one of us or leave a message on the phone line.

W: Could you just read off that Web address again, too, please?

Anita: Yes, muse.jpl.nasa.gov. I think what a lot of people are doing is going to the signup form and trying to make that work, but that's a dummy page. That's a teaser page. The only thing that works on there is the form.
So once we have your IP address registered, the muse site is what you want to go to. Eric de Jong and Rich Pavlovski have joined us.

M: In a few minutes we might also have Steve Levoe, probably, and Shigeru Suzuki is here.

Anita: Great. We're getting questions, Eric, about the 3-D model that's being shown, and how people can get that. Is it on the muse site, the public pages, or NASA TV yet?

Eric: None of the above. Actually, instead of saying what you can't have, I'd like to say what you can have. We're still working on developing a 3-D model. All you saw was the barest shell of a 3-D model from one single image, out of the 225 images it would take to make an actual full 3-D model. So we still have to figure out how to make a real 3-D model; also, if you turned it around 180 degrees, all you'd see is a lot of holes. It looks good for that little animation that was put together the other day in almost real time, but there's not a whole lot there. We just wanted to show off some of the highest-resolution photo images, and we thought it would be nice to do it in 3-D.

What's being worked on now comes down to two things you may have noticed. One is that Jim Bell and his team are working extremely hard on getting the color calibration done, along with the science, and so there are still some variations, as there are even on Earth: if you go from dawn to dusk through high noon, you'll see how the sky changes color and how all the rocks look different out in the desert. They do similarly on Mars. We are taking siestas as part of the day because we're running a little warm while we sit on the lander rather than on the ground, and while we're taking a siesta we can't do any imaging.
So that does limit the kind of daylight we see on Mars, and we're getting a little more variation than we will once we're on the surface; then we can try to have the same camera take its images at pretty much the same time of day. That's what we want, and if we do that, the only variation left will be dust-and-atmosphere variation, which I hope is not that large. If we take them on close to the same sols, it shouldn't be that large. So that's one aspect.

The other aspect is the correlators that correlate one image to another. That, of course, assumes we have both a left and a right eye. Once we run the correlator and figure out what the disparity is, we can figure out what the x, y, z … are, and then try to build a model that actually registers to the images. All of that has been worked for exactly one image.

Larry: This is Larry from Chicago, with a follow-up on the answer to that question. How are you making the full-color stereo images from the Pancam?

Eric: The answer is, we're not. To date, we are not. We have seen no full-color stereo images. There have been none to date.

Larry: So when they are done, they will be on the imaging pipeline?

Eric: When they are done, they won't be on the imaging pipeline. What will be on the imaging pipeline are all of the images that went into making them. Once we make the first color stereo image -- and we haven't made any yet -- every image that was used to make it will be fed to you, so you'll have the left and right eye and all of the filters that were used to make the color.
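As a purely illustrative aside, the "combine filter images into color" step that comes up throughout this discussion can be sketched in a few lines. Everything here is hypothetical: `compose_rgb`, the per-channel weights, and the toy frames are stand-ins, and the real pipeline involves far more involved radiometric calibration by the science team.

```python
def compose_rgb(red, green, blue, weights=(1.0, 1.0, 1.0)):
    """Combine three co-registered single-filter frames (2-D lists of
    0..1 brightness values) into one RGB image: a list of rows of
    (r, g, b) pixels.  The per-channel weights stand in for the real
    radiometric calibration, which is far more involved."""
    def scale(value, weight):
        # Clamp so a strong weight cannot push a channel past 1.0.
        return max(0.0, min(1.0, value * weight))

    rows, cols = len(red), len(red[0])
    return [[(scale(red[y][x], weights[0]),
              scale(green[y][x], weights[1]),
              scale(blue[y][x], weights[2]))
             for x in range(cols)]
            for y in range(rows)]

# Toy 1x1 "frames" from three filters near red, green, and blue.
pixel = compose_rgb([[0.8]], [[0.5]], [[0.2]])[0][0]
```

The same idea extends to a full mosaic tile by tile; the hard part, as Eric notes, is that the left- and right-eye filter sets don't match, so the weights differ per eye.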
So what we really need, of course, is something as close as possible to full color in your left eye and as close as possible to full color in your right eye -- a left and right eye for a portion of a mosaic, however large a portion we make, for the full-color stereo. Long term, we'd love to make a 360-degree full-color stereo; I'm sure we'll do that sooner or later. But we need at least three filters close to RGB, or a way of synthesizing it, for both the left and right eye, for the mosaic, for multiple tiers.

Jim Bell gave a wonderful description of the 14-odd filters: they don't actually match up between the two eyes, because the team tried to maximize the amount of science you could get out of the filters, which didn't assume the human labor would be minimized at all. So there is a lot of human labor that has to go into the balancing act. Both cameras do have one filter that's identical, which is why the black-and-white anaglyphs we have already created and sent out are somewhat easier to do than products where we have to balance the fact that the pictures aren't identical, and aren't identical to the red, green, and blue the human eye would see. Is that a pretty good answer to the question?

Larry: Yes, thanks.

Eric: We're all probably as anxious as you are to see a full-color stereo panorama of the entire region; everyone really wants to see that product. You will have all the images we use to make it, so if you want to try your own, you'll be able to do that as well. I actually think that, at least if the released images keep coming as they have so far -- what's that famous phrase, "current performance is no guarantee of future earnings," or something?
But it certainly seems like the science team has been working very hard to create very nice products from the data they have downloaded; and while you're fully capable of dealing with all of the individual images, some of the mosaic products, I think, are easier to derive products from. The images were taken to mosaic together, so you don't have the registration problem -- they're already registered for you -- and some of the mosaics might be things you can pan on, zoom on, and do a lot with. The color is as good as we can make it at this time, and it's pretty good; it's probably well within the variation that Mars presents for us color-wise from day to day and hour to hour. So those released image products, both the anaglyphs and the color mosaics, are good ones to consider using for a lot of your displays and other things.

I don't know how many of you have been working with those released images, but they are high resolution. Some of them are 25,000 pixels across and 1,500 to 3,000 pixels high, so for plasma displays, for video displays, and even for planetarium displays, that's a lot of pixels to start from.

I think the two images today, if you use them and go back to the previous images, give you an increase in the richness of what we've given you. One is called Horizon Hills. It is identical to the color panorama we released yesterday, except that it's annotated with directions to the different features, and it gives estimated distances as well as angles to them, and provisional names -- just nicknames we're using for the different features. Then there's a down-looking one that Tim Parker presented during the press conference. It's black and white.
It shows a bigger part of the region and a portion of the landing ellipse. It has yellow lines -- I convinced them to make them yellow instead of blue, so you could see them more easily -- going out to each of the features. So if you take that down-looking view, and then you take this color panorama that's 25,000 pixels across with the names annotated on it, I think that will allow you to go back to all of the previously released images, both the ones sent out straight on FEI and the released processed images, and now know where they are. You can see the feature, and you can see which way the rover is facing, and all of that. There's also a down-looking image of the rover with a yellow arrow on it, from two days ago, so you know the rover is basically facing toward the south; and if you put that rover image together with the other two, I think it gives you a good context.

Anita: Could I prevail upon you -- we've had some questions about how you figure out how far away something is. Can you give an overview of that?

Eric: Yes, I can comment on that. So the question is, "How are we figuring out how far away something is?" There are two real clues that allow us to do that. The first one is simple triangulation. By that I mean, first we had to figure out where we were; once you solve that, the rest is actually easy. What we first did is run a bunch of rays out to the features: if you assume you're at one place, you run the rays out, and they should intersect the features directly. You keep adjusting your assumed origin until the lines come out at approximately the right part of each feature and the angle between them is the angle you see in the images. So that's first; that gives you an origin.
Once you have an origin, there are two ways to determine the distance to a feature. The first is to ask Mike Malin [imaging team for Mars Global Surveyor] for one of his orbital images with a known scale -- ten meters per pixel, for example, which is approximately the scale of the one presented today -- and then literally count pixels. That's very easy; it's the easiest way, because Mike has done the hard work of giving you a nicely scaled, down-looking, ortho-rectified image. So that's number one.

Number two is a little harder, but you can do it with just the images collected from the rover: get a left and right pair of images taken by the same panoramic camera, or the Navcam, at the same time of day with the same frequency. Then put up both images and measure the difference in position of any feature between them. The difference is in the horizontal, x-direction only; they're lined up vertically. Measure how many pixels apart the same feature is between the left and right image. That's called disparity. Especially in images where the lander is part of the picture, you can judge the scale by how far apart a portion of the rover appears, and from that judge how far away the distant objects are. So that's an alternative way of doing it.

Either way works. The second way is stereo analysis, and the first is just using a map; I'd advise using the map when you can -- the down-looking image with pointers -- since it's certainly the easier way.

We've been working on getting you some additional tools. Obviously, our first priority is to get all the images to you, and I think that has been working pretty well, and we're trying to fix any places where it doesn't.
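The two range-finding methods Eric outlines reduce to simple arithmetic, sketched below. The function names and the example numbers are illustrative only; the baseline and focal length are placeholder values, not actual Pancam parameters.

```python
def map_distance_m(pixel_count, meters_per_pixel=10.0):
    """Method 1: count pixels to the feature on an orbital image of
    known scale (e.g. ~10 m/pixel) and multiply by that scale."""
    return pixel_count * meters_per_pixel

def stereo_range_m(disparity_px, baseline_m, focal_length_px):
    """Method 2: from the horizontal pixel offset (disparity) of the
    same feature between the left- and right-eye images:
        range = focal_length * baseline / disparity
    Nearer features show larger disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# 25 pixels on a 10 m/pixel orbital image -> 250 m.
# Illustrative stereo numbers: 0.3 m baseline, 1500 px focal length,
# 9 px of disparity -> 50 m.
```

In practice the triangulation-against-a-map route avoids needing calibrated camera parameters at all, which is why Eric recommends it when an orbital image is available.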
Actually, there are three things we're trying to do. First, get the images out to you, both the released images and all of the full frames. In general, I think that's working extremely well, and Rich and Chris Cordell -- but especially Rich [Pavlovski] -- are fixing it wherever there's a problem.

The second thing is to create animations. Shigeru [Suzuki] has been doing that for the press conferences and other reasons, and then we get those out to you on a DVD. We've also started giving those images to Lance Watanabe [Mars webmaster], so that he can put a QuickTime or compressed version on a separate site away from the images; it won't interfere with you getting the images, but you can still get those. And we've also been sending out DVDs -- so I guess that's four things. That's in video format.

The third thing is figuring out ways to give you HD format; maybe I'll let Shigeru talk about that in a minute. There are really two ways of getting HD to you that I think are viable. The Web is not a good one: it's a huge download problem. If you had 1.5-gigabit-per-second connections to the Web, and we had 100 times that bandwidth, then I guess we could do it, but that doesn't seem to be where the Web is today.

So we've taken two approaches. One is to make an MPEG-encoded stream, put that on a DVD, and send it to you. That means you have to have a machine that can load from the DVD onto a disk; from that disk, you need a card capable of interpreting and playing that stream out as a standard signal to one of your monitors, plasma displays, or projectors. If you're not very familiar with HD, it's not likely you're familiar with how to do that.
So those DVDs labeled HD, with MPEG-encoded streams, are only useful to people who have such systems.

The second way within HD is to create tapes. The trouble is, HD is "great" because there are so many formats to choose from and so many kinds of tapes; and unless you already have a tape deck -- in fact, two tape decks -- that's probably not useful to you. These aren't inexpensive tape decks to buy; they're in the $80K range. So that's probably the least useful option for most of you, although there are a handful of you who might be able to use it and might then help others with it. That one we'd like to discuss offline with a handful of people, maybe exchange some e-mail with those who already have tape decks and are already working with HD, and figure out what we can do. We are using DVCPRO 100 as our tape deck of choice for that last approach. But in the interest of everyone else's time, we should work that one out offline.

If Shigeru has any comments on what I just said -- or anybody else online -- please comment. Then I'll go to the last topic, which is tools. Any comments on the HD?

Ryan: This is Ryan Wyatt from the American Museum of Natural History. We're definitely interested in talking about HD footage, so we'd like to be involved in whatever offline discussions take place.

Eric: Okay, and we'd be happy to do that.

Steve: This is Steve Lee in Denver. I've been able to use the HD content from the DVD you sent out last week. My only editorial note is that we don't have the capability to edit snippets out of the pieces, so for our use, having each animation as an individual file would be preferable. I think one of them, the daily update, had still images.
And then there was an animation tacked on at the end, and that one we weren't able to use, just from the way our system works; but individual files for each animation would be great.

Eric: Okay.

Shigeru: If you have individual animations as individual MPEG files, you can set up the playlist and playback …

Steve: Yes, exactly. We have that capability on our system. We put up the full-resolution still images through a separate input, so that's part of the playlist as well; and we can actually do a better version of the still images than just having them on the video.

Eric: Oh, good. That makes sense; thank you for that feedback, and we're glad some people are making use of those. Thank you.

Anita: Yes, we can set up a separate teleconference for people with HD issues, depending on Shigeru's and Eric's schedules.

Eric: Yes, I think that would work best for everyone; that way, those who aren't dealing with HD won't have to worry about the gory technical details. That sounds good.

Now, let's get to tools. We really weren't planning to give you that many tools in the first place, but we know this really is a flood of data. While there are still fewer than 100 press-released images, I think, we're going to end up with thousands and thousands, maybe even hundreds of thousands, of these full-frame images as time goes on. As we said in the press conference, there are 525 images in one full-color mosaic, and hopefully we'll take more than one of those as the mission goes on. So we've been trying to develop, while doing everything else, some minor help in terms of a tool for both the full-frame images and/or some of the mosaics, because you're obviously going to need your own software. You can use Photoshop.
The full-frame images are JPEG, and the released images are TIFF and JPEG, so you can use simple commercial off-the-shelf tools or shareware tools to deal with them; but we realized it might also be good to have a slide-show kind of thing. We don't have a lot of time to develop that, so Steve, in his not-copious spare time, has been trying to get something like that going. I'll let him give you a little status on where that is. Do know that even to use these kinds of tools, you might have to have the right kind of environment; it's not trivial. If you just get a JPEG, whatever viewer you have can deal with it, including your Web browser; but if we want to do something more than that, tool-wise, the environment might have to be specific, and there might be some glitches involved. With that preamble, I'll turn it over to Steve for a status of where we are; again, we're working on it in between press conferences and the other things we talked about.

Steve: What I'm creating is a Java-based slide-show viewer, and what you would be viewing are the JPEGs we're sending you of each of the experiment data records, the EDRs. Those are relatively small images, and by themselves they might not carry a lot of information about what each one is, and you might not necessarily have a good way of displaying them, although some people may have already developed one. So when we receive the images, we add a little bit of information about each image -- when it was acquired, what instrument it is, and a few other tidbits -- and we create a scalable vector graphic, an SVG file, which is really the image with a little bit of a caption on it.
Those are part of what you're receiving, along with the JPEGs; they're a pair, because the scalable vector graphics file actually refers to that JPEG -- really a Web-ish, HTML-ish type of thing. So I've been creating this slide-show viewer that I want to distribute. I had it working perfectly here, but I'm still working out a way to distribute it conveniently, and I haven't completely finished that. I was hoping to have it ready to announce today, and I just couldn't get it done. What we will do is put a page for the SVG viewer on the museum site, with things you can download for your particular machine and instructions on how to install it; and there will probably be some other interesting SVG images and other material there that will interest you as well. So that's the state of affairs right now.

Eric: We kept adding requirements -- by the way, that always happens here -- to what we'd like Steve to do on this, so this is an alpha test: we added the real-time images, then we thought it would be nice to do something with the released images, and we kept incrementing the requirements a little bit. Right now he has an alpha version that he's trying to get out to you, but he doesn't want to send out the instructions until we figure out, across the platforms, how you can actually run it, because you have to have the right environment to make this thing work; that's absolutely required. Java and SVG are supposed to work on all machines, but the versions of both keep changing, and unless you have the right version of everything, things will not work. So we're working those details as quickly as we can. The caption material will be an automated caption; it won't have a lot of information in it.
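The JPEG-plus-SVG pairing Steve describes -- an SVG that references the JPEG and carries a short caption -- might look roughly like the sketch below. The function, element sizes, and file names are hypothetical illustrations, not the actual format JPL generates.

```python
def make_caption_svg(jpeg_name, caption, width=1024, height=1024, band=40):
    """Return an SVG document that displays a JPEG with a one-line
    caption below it.  The JPEG is referenced by bare file name,
    which is why the .svg and .jpg must sit in the same directory,
    as noted later on the call."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg"\n'
        f'     xmlns:xlink="http://www.w3.org/1999/xlink"\n'
        f'     width="{width}" height="{height + band}">\n'
        f'  <image xlink:href="{jpeg_name}" x="0" y="0"\n'
        f'         width="{width}" height="{height}"/>\n'
        f'  <text x="10" y="{height + band - 12}" font-size="18">'
        f'{caption}</text>\n'
        f'</svg>\n'
    )
```

Writing such a string out next to its JPEG and opening it in a browser with an SVG plug-in approximates the captioned view the slide-show tool is meant to automate.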
That's the one thing I find is moving a little slower than the flow of images -- and I don't just mean the automated captions; I mean the caption material itself that has been flowing with the images. We've asked the science team to always include a caption every time they make a release. That still seems to be the hardest thing to fit in the timeline. Admittedly, some of the images we're getting five minutes before people go on air, but nonetheless, I think that's lagging a little behind. It will help now that we have the landing site defined and basically a compass heading, which will make it easier for people to have reference images when they write captions. It has been difficult to talk about something on the horizon when you can't even give it a direction and you don't have a distance to it; that limits what you can say about it. So I think that will help some. But I must confess, it's still difficult to get those words put together at the last minute. We do have some people helping with that here, but it's still a little thin, and it will improve over time. As the calibration and other things come along, it will be easier, and as people discover more about the site, they'll have more to say about it.

So -- I have said this before, but I especially want to say it now -- NASA TV is your best bet for hearing what the science team is saying on a daily basis. We may not have a press conference this weekend, but so far there has been one every day, and NASA TV replays them as well, so you don't have to catch it live.

Anita: And they're Web-archived as well.
Eric: Right, so you can go back in time or catch today's; and the commentary there is probably as detailed as anything we have about the imagery you're seeing, so it supplements the rather thin caption material. If you want to find out more, that's a good way to do it.

Ryan: This is Ryan in New York again. I'm sorry, but the SVG files -- are those currently being made available on FEI, or how are we going to get them?

Steve: Yes, they're currently available in the mera_ops_jpg_svg file type.

Eric: And since this is a new file type that contains the information, and since this is an alliance, if anybody else wants to use this same information in scalable vector graphics form and come up with new tools we can all share, I really encourage people to feel free to do that and share it with the rest of our alliance.

Anita: Some of them have been reusing IrFan -- if I can find that on my e-mail.

Ryan: I'm sorry, I'm still a little confused, because I don't see any of those files on the machine we're using to archive. Where are those?

Steve: It's a new file type. In FEI, type "show types," and it will list the file types available.

Ryan: Okay.

Steve: So if you have an automatic file puller set up to pull all the images from JPEG EFS, you won't be able to get these automatically -- you just add that type to your puller.

Ryan: Got it. Okay, so we're not pulling those files over.

Steve: And you can pull them over in one big chunk by just saying "get *" ["get star"].

Ryan: Got it.

Steve: One other thing that's important to know: the SVGs have to be in the same directory as the JPEGs.

Eric: Again, that's because, as Steve said, the SVGs have information about the images, but to show them, an SVG needs the reference image -- the JPEG.

Steve: If you go to Adobe, you can get an SVG plug-in for most browsers, so they can be viewed inside most browsers.
The Java tool we're going to be delivering is a way, outside of a browser environment, to give you a slide show that just plays on a continuous basis.

Eric: In the interim, and also as an option, if you do go to Adobe and get that plug-in for your browser, it will let you see this automated caption material, I believe; so even before the Java tool is available, you can look now and see caption material that was previously not easily available to you.

Steve: Also, along with the dot-SVG files, there is a dot-TXT file type in FEI; the dot-TXT file is just the caption. It will have a name identical to the EDR's, just with a different extension.

Eric: That came out of previous discussions here: even before we were thinking about doing this, people had asked whether they could have the automated caption file by itself, so we included that as well.

Larry: This is Larry from Chicago. Where is the Horizon Hills annotated picture?

Eric: It should already be released, because it came out in the press conference this morning, which was held at 9:00 a.m. Pacific Standard Time, so it should be in the released image set.

Steve: The MER-A_OPS_PIO release file type.

Larry: Can you repeat that, please?

Steve: It's in the MER-A_OPS_PIO release file type.

Eric: For Sol 11.

Larry: Okay.

Eric: I think you'll find that a very useful image.

Larry: Absolutely.

Anita: Have we fried your brains?

Jamie: This is Jamie in Colorado. I have a quick question for Eric. Are we still at Site 0? The way we're parsing the file name, we're coming up with Site 2.

Eric: I'll let Steve respond to that, because he's the expert on the …
Steve: There are definitions of the sites. I've talked to people who know exactly what's going on: Site 0 is before they did any stand-up or anything; that was the first set of pictures. Site 2 is once they stood the rover up enough to take pictures, so almost everything you're seeing is Site 2. Then there are a number of positions within that site, but you can pretty much mush those together.

Jamie: Do you have an example? I think the last one we looked at, we were at Position 11, Site 2. What constitutes a position change?

Steve: A position change -- I'm not entirely certain exactly what it is. I think when they retracted the airbags, and that caused the lander to shift slightly, that created a new position.

Eric: Yes, new positions are any significant change, like the tilt; and also when they finally go off the deck, when they get on the ground, moving over rocks creates position changes. When you have to take a new Navcam image, then you move to a new site. It's admittedly a little bit arbitrary.

M: That's fine; I'm just glad our file parser isn't wrong.

Eric: Okay, good. I understand that.

Anita: Let me go back to the range-finding real quick. A guy asks, "Does Spirit have a range finder?" No.

Eric: No.

Anita: All analysis, right?

Eric: We don't have a laser range finder, and thank God; that's correct. Everything is through analysis: either stereo analysis, or understanding the map -- feature identification, and that latter one boils down to triangulation.

Anita: I have not seen any word yet on the TV schedule for the second landing. Have you got a rough overview for folks, so they can plan some events?

Eric: I don't think they've announced it, but it's on the 24th, I believe, around 8:30 p.m.

Anita: When the TV commentary would start?

Eric: Yes. Actually, they'll probably start around 6:30 p.m.
Anita: Pacific.

Eric: Yes, Pacific Standard Time.

Anita: And landing, is that still about 9:05?

Eric: Yes, and they will have some kind of warm-up discussions that happen the day before the 24th, and that's all tentative, also. But you can expect that there will be two levels of discussion, and it will happen the day before or two days before (no more than two days before, but probably one day before). The two discussions that will happen are: a complete analysis of the entry, descent, and landing that happened for Spirit, because we have all the analysis data that we haven't really visualized or gone over. The other one is, as they did with Gusev, a science team, not just from our science team but probably also some external scientists who have studied Meridiani, will talk about Meridiani, its history, and in general the geology as we know it from orbit ahead of time. So they'll discuss that, probably the day before, and that's worth tuning into so you get a context for the site.

M: In today's press briefing, Rob Manning was talking about having a visualization next week, which would incorporate what they'd received from the telemetry on the Spirit landing. Do you know what form that's going to be, or is it like another Dan Maas animation that's just synced up, or what?

Eric: It's not another replay of a segment of the Dan Maas animation; we've done a lot of those, and that was a very good planning animation. Actually, Rob Manning and I and three other members of the engineering team got together yesterday, and the total planning was an hour's discussion. I have one person on our team who is now starting to create that animation. So at this moment I don't know what viewpoints we'll have, but I'm going to work from both ends; i.e., on one end it will start with the parachute deploy.
On the other end, it will start with the last bounce to where we landed, because now we know where we landed. So we're going to work from both ends and work our way to the middle, and see how much we can get done in a week of animation while we're supporting all the other press conferences.

M: That would be extremely useful. We're doing another landing event for Opportunity, and to be able to go through the reality of what happened with the last one would be, I think, a very big positive for the people there.

Eric: I'm glad to hear that, because I think we're going to put a fair amount of resources into trying to do that in the next week. And the benefit of this one is that the engineers have been spending a lot of time analyzing accelerometer and other sensor data they have, and they have a pretty good idea of exactly how high it bounced and where it went and all those kinds of things. So this one will be based on the reality of all of those observations.

M: I assume this will be a high-def video?

Eric: Yes, we pretty much don't do anything else.

M: Okay. If there's any chance that we could have those by a week from Friday, that would be wonderful. But I understand the process.

Eric: That is when you should have them, because I think if we do them much later than that, they won't be interesting to people. It may not be a lot before then, but that's our target.

M: That's fine. Thank you.

Anita: …, but that's not – that would have to be mailed out? Do we do that?

Eric: Yes, we'll put the DVD together and send it out.

Anita: Does anybody else have anything for Eric and Steve and Rich before they need to go?

M: Do we know when the next DVD with the high-def stuff will be sent out?

Eric: We'll probably collect a bunch of stuff through Friday, produce it over the weekend, and send it out over the weekend.

M: Okay.
M: Yesterday's press conference, with going across the full-color pan, is that available yet?

Eric: Not on DVD yet, but that's high priority.

M: Chomping at the bit, here.

Eric: Got you; understand.

Carolyn: We look forward to seeing you live on Saturday. This is Carolyn in Houston.

Anita: Eric is going to be a TV star?

Carolyn: I think the whole crowd is. I'm not sure; I keep getting more and more people on the script. This is Geoff Haines-Stiles' project. Yes, stay tuned; we are.

Eric: Thank you. We will.

Carolyn: It's like herding calves. More and more stuff gets added, and we have 20 or 30 kids talking. Anita, I was going to send you some pictures from last week's trial run if you have a place for them.

Anita: Yes, just send them to me, and we'll work with Lance about where we can put them up on the Web site.

Carolyn: How big? Little ones? Web ones?

Anita: Send us the original; we can always change the size.

Carolyn: Okay.

Anita: Okay, anybody else?

Peter: This is Peter Coppin from EventScope at CMU. I just tuned in. EventScope is up with updates within it, so if people download EventScope from www.eventscope.org, there are updates that we're refreshing once or twice a week. Also, we have created a level-of-detail Mars that we're putting into the Adler Planetarium this week. This is designed for an Elumens VisionStation, so if any of the museums have Elumens VisionStations, please contact me at coppin@CMU.edu.

Michelle: This is Michelle from the Adler. We're testing out different joystick-type peripherals to see which one might work best with the Elumens station. We haven't quite yet found one that won't break within 20 minutes of little children using it, so we're working on that part. But if anybody has any suggestions, let us know; I know that some of the various trackballs work well, so we're still trying to narrow that down. But we're looking forward to seeing the EventScope stuff.
Peter: The first version will use your track pad, so hopefully we won't have that problem.

Michelle: I hope not.

Carolyn: We always use the ones you can buy in arcades, and they're pretty stable.

Eric: They have to last under very, very hard use.

Carolyn: I know. Ours get killed by the hour.

Peter: This thing we're sending to Adler is a little bit different than EventScope. It's a bunch of existing Mars data sets you can zoom in on, and it slots in higher and higher resolution textures as you use them on the surface. But next week, what we'll have is a version of EventScope that is also designed for the Elumens VisionStation, and that has our weekly updates; so that would have the latest data from the rover, visualized in a virtual world.

Anita: Michelle, you guys have a GeoWall at Adler.

Michelle: Yes, we do.

Anita: How is that going?

Michelle: Pretty well, so far. The reason Larry was asking the questions earlier was, I think, that they were having a little bit of trouble, or at least anticipating trouble, with the 3-D color stereo images, because the right and the left filters don't match up quite right. But they were testing it out yesterday, and I thought they were going to try to install it all today, so it's up and running. I'm at home right now, though, so I don't know what the follow-up was with all that. But so far, it's going okay. They got the computer system in and got everything going.

Anita: Eric, are you guys working with our own GeoWall folks?

Eric: We have in the past …

Carolyn: We're [Houston] working on some new software people can have if they want it.

Anita: Yes, you sent out an e-mail about that, right?

Carolyn: Yes, we have it piloting on several computers in the museum.
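Peter's description of slotting in higher and higher resolution textures as you zoom is a standard level-of-detail scheme. A minimal sketch of one way to pick a detail level from viewer distance; the thresholds, level count, and function name are invented for illustration and are not EventScope's actual values.

```python
import math

def texture_level(distance_m: float, near_m: float = 10.0, max_level: int = 6) -> int:
    """Map viewer distance to a texture detail level.

    Returns max_level (the finest texture) when the viewer is within
    near_m of the surface, then drops one level each time the distance
    doubles beyond near_m, clamped at level 0 (the coarsest texture).
    """
    if distance_m <= near_m:
        return max_level
    return max(0, max_level - int(math.log2(distance_m / near_m)))

# Closer viewers get finer textures; far viewers get coarse ones.
print(texture_level(5.0))     # → 6 (finest)
print(texture_level(30.0))    # → 5
print(texture_level(1000.0))  # → 0 (coarsest)
```

A renderer would call something like this every frame and swap texture tiles whenever the level changes, which matches the "slots in higher and higher resolution" behavior Peter describes.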
I don't want to send it out until I can guarantee a person can't crash it. But we have a Mars TicTacToe and Mars update software that can just sit in kiosk format, and we're playing with a survey as well.

Eric: Great.

Carolyn: We're playing with how to download, and we have the Windows version downloading okay; the Mac version is fussy. So we don't want you to take them for another two or three days, until we're sure we've caught all the little silly things. I'll e-mail Anita, and she can pass it on to everybody.

Eric: Thank you.

Anita: Carolyn, is that something we can post on the museum site, or not?

Carolyn: I'm going to check with Pat; I think so.

Anita: Yes, you were having some cost-recovery issues, if I recall.

Carolyn: Yes, let me see what she says, because we've worked pretty hard trying to make it something we can afford.

Anita: We really appreciate those of you who are doing things like that and sharing them. That is really great for everybody. I will say that I sent a summary of where we are so far to Jim Stofan, the Informal Education Lead at Headquarters, last week, and he has passed that information up to the Administrator. So they know what we're doing with you guys and what all of you are doing, so keep it up; great. Anybody else?

Michelle: This is Michelle. I have a sidebar question, and then a completely unrelated question. My first sidebar question is from a teacher: we had sent out the information about Schoolhouse Rocks to the Illinois Science Teachers Association listserv, and a question came back as to where those images will be posted. Have they decided yet where they're going to post the images of all the rocks that they get in Arizona?

Anita: Just off the top of my head, I'd go to the Arizona Mars Education site, but we'll be seeing Sheri today, and we'll ask her about it.
Michelle: The other question is, who won the press pool on the number of bounces the rover was supposed to take? That one came up today. Everybody was watching the press conference going, "Who won the press pool?" So if you guys have any idea who that was, let us know.

Eric: I didn't hear, but I'll ask around. I don't know the person.

Anita: We're done, unless somebody has more questions.

Eric: The same time next week?

Anita: As long as it works for you guys.

Carolyn: Tune in Saturday at 2:00 p.m. Central.

Anita: Yes, Carolyn's referring to the Geoff Haines-Stiles "First Look" program, and they're broadcasting from Houston at 2:00 p.m. on Saturday. Thanks, everybody.