Introduction to the PlanetLab Platform

					Introduction to PlanetLab and
  Its Possible Strengths for iVCE


         2006.3.19
Outline
   PlanetLab overview

   Comparing PlanetLab with Globus

   Some thoughts on carrying out the project (discussion)
PlanetLab Overview
   PlanetLab: a testbed for large-scale Internet services
      an open community testbed for Planetary-Scale Services

    Currently: 655 nodes over 311 sites (2006.1.17)
     universities, labs, Internet2, colo centers
PlanetLab Overview: Background
   More and more services (and applications) are being built on
    large numbers of nodes widely distributed across the Internet
       CDNs, peer-to-peer, ...


   Many distributed data structures, services, and systems have emerged
       distributed hash tables (DHTs) that provide scalable
        key-to-node translation
       distributed storage, caching, instrumentation, mapping, ...
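To make the DHT bullet concrete, here is a small consistent-hashing sketch in Python showing scalable key-to-node translation. The class and node names are invented for illustration and are not taken from any PlanetLab service.

```python
import hashlib

def ring_pos(value):
    """Map a string to a position on a 2**32 hash ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** 32)

class ToyDHT:
    """Consistent hashing: a key is owned by the first node at or
    after its ring position, so adding or removing one node only
    remaps the keys in that node's arc."""

    def __init__(self, nodes):
        self.ring = sorted((ring_pos(n), n) for n in nodes)

    def lookup(self, key):
        pos = ring_pos(key)
        for node_pos, node in self.ring:
            if node_pos >= pos:
                return node
        return self.ring[0][1]            # wrap past the largest position

dht = ToyDHT(["node-a", "node-b", "node-c"])
owner = dht.lookup("some-object")         # deterministic owner for the key
```

Any node can run this lookup locally, which is what makes the translation scalable: there is no central directory to consult.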
     Services on the Internet

   Many distributed nodes cooperate to provide functionality: overlays
       e.g., distributed storage services
         supporting a large number of Internet services
   Each service needs an overlay covering
    many points
        logically isolated
   Many concurrent services and applications
        must be able to slice nodes => one VM per
         service
        each service has a slice across a large subset
   The next Internet will be created as an
    overlay on the current one
        it will be defined by its services, not its
         transport
        translation, storage, caching, event
         notification, management
   There is NO vehicle to try out the next n great
    ideas in this area
History
   2002.3, Larry Peterson (Princeton) and David Culler
    (UC Berkeley and Intel Research) founded PlanetLab
       initial members: 30 researchers from MIT, Washington, Rice,
        Berkeley, Princeton, Columbia, Duke, CMU, and Utah, etc.
       funded by Intel
   2002.6, PlanetLab 0.5 completed; 2002.10, initial deployment
    completed (100 nodes at 42 sites)
   2003.9, NSF grant of US$4.5 million
   2004.5, European PlanetLab meeting
   2004.12, CERNET joined (funded by Intel and HP)
   2005.9, node count exceeded 600
     Why is PlanetLab interesting?
1. Open, large-scale testbed for P2P applications or Grid
   services
2. Solves a similar problem to Grids/Globus: building virtual
   organizations (or resource federations)
   Grids: testbeds (deployments of hardware and software)
     to solve computational problems.
   PlanetLab: testbed to play with new distributed
     applications
Main problem for both: enable resource sharing across
  multiple administrative domains
PlanetLab's Goals
   Research testbed
        run fixed-scope experiments
        large set of geographically distributed machines
        diverse & realistic network conditions
   Deployment platform for novel services
        run continuously
        develop a user community that provides realistic workload

            [Figure: design → deploy → measure cycle]
   Catalyze the evolution of the Internet into a service-oriented
    architecture
PlanetLab's Design: Trade-offs
   "Mirror of Dreams" project
   K.I.S.S.
       building blocks, not solutions
       no big standards, no OGSA-like meta-hyper-supercomputer
   Compromise
       a basic working testbed in the hand is much better than
        "exactly my way" in the bush
   "just give me a bunch of (virtual) machines
    spread around the planet... I'll take it from there"
   a small distributed architecture team of builders
Architectural principles
   Distributed virtualization
       Slices as fundamental resource unit
       Distributed Resource Control
   Unbundled Management
   Application-Centric Interfaces
   Self-obsolescence
       everything we build should eventually be replaced by the
        community
       initial centralized services only bootstrap distributed ones
                 Distributed Virtualization:
                 VMs and Slices
        Each PlanetLab node can be an ordinary machine or a
         high-performance system such as a cluster
        Each PlanetLab node hosts one or more virtual
         machines (VMs)
                each created and managed by the node
            VMs on different nodes can be combined
             to form a slice
                 the slice is the key
                  architectural element of
                  PlanetLab
           Introduction to Slices (1)
           Each service or application runs in a slice of PlanetLab
                     a distributed set of resources (a network of VMs)
                     allows services to run continuously
                     slice = a set of virtual machines (VMs) operating in concert
                     slices are isolated from each other, minimizing the effect one
                      slice can have on another
           A VM monitor (VMM) on each node enforces slices
                     limits the fraction of node resources consumed
                     limits the portion of name spaces consumed




[Figure: slices K and L each span VMs on multiple nodes; on every
 node a VMM hosts the VMs, so each slice holds one VM per node it
 uses]
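As an illustration of how a per-node VMM might cap each slice's share of resources, here is a minimal admission-control sketch. The class name and share numbers are hypothetical; real PlanetLab nodes used mechanisms such as Linux vservers and proportional schedulers.

```python
# Toy bookkeeping for one node's VM monitor: every slice's VM
# reserves a fraction of the node's CPU shares, and a new VM is
# admitted only if the total stays within capacity. Names and
# numbers are illustrative, not the real PlanetLab interface.
class NodeVMM:
    def __init__(self, cpu_shares=100):
        self.cpu_shares = cpu_shares      # total shares on this node
        self.vms = {}                     # slice name -> reserved shares

    def create_vm(self, slice_name, shares):
        """Admit a VM for slice_name only if capacity remains."""
        if sum(self.vms.values()) + shares > self.cpu_shares:
            return False                  # would exceed the node's capacity
        self.vms[slice_name] = shares
        return True

node = NodeVMM()
node.create_vm("slice-K", 60)             # admitted
node.create_vm("slice-L", 30)             # admitted
node.create_vm("slice-M", 20)             # rejected: only 10 shares left
```

The same pattern applies to the name-space limits mentioned above: the VMM is the single local authority that keeps one slice from starving another.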
     Introduction to Slices (2)
   Slices are scalable
       a slice can consist of a single virtual machine or of all the
        virtual machines in PlanetLab
       it depends on system resources and the needs of the
        service (or application)
   Distributed resource control
       service producers (researchers)
           decide how their services are deployed over available
            nodes
       service consumers (users)
           decide what services run on their nodes
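The producer/consumer split described above can be sketched as a tiny admission check. The site names and policies here are invented for illustration; PlanetLab's actual resource-control mechanisms are richer.

```python
# Service producers pick where they want to deploy; each site's
# consumers decide which services their nodes will accept.
# All names here are hypothetical, not a real PlanetLab API.
site_policy = {
    "site-A": {"cdn", "dht"},   # services site-A admits
    "site-B": {"dht"},          # site-B admits only the DHT service
}

def deploy(service, wanted):
    """Intersect a producer's wanted (site, node) list with
    each site's admission policy."""
    return [(site, node) for site, node in wanted
            if service in site_policy.get(site, set())]

placement = deploy("cdn", [("site-A", "n1"), ("site-B", "n2")])
# only site-A admits "cdn", so the slice lands there alone
```

Neither party fully controls the outcome: the producer proposes a deployment, and each site's local policy filters it.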
    Problems
   “Slice-ability” – multiple experimental services deployed over
    many nodes
        Isolation & Resource Containment
        Proportional Scheduling
        Scalability
   Security & Integrity - remotely accessed and fully exposed
        Authentication / Key Infrastructure proven, if only systems were bug free
        Build secure scalable platform for distributed services
         sandboxing techniques
   Management
        Resource Discovery, Provisioning, Overlay->IP
        Create management services (not people) and environment for innovation in
         management
   Building Blocks and Primitives
        Ubiquitous overlays
   …
Work Being Done on PlanetLab
   Network measurement
   Application-level multicast
   Distributed Hash Tables
   Distributed Storage (storage grid)
   Resource Allocation
   Management and Monitoring
   Distributed Query Processing
   Virtualisation and Isolation
   Testbed Federation
   hundreds of projects in total
Virtualisation and Isolation
   Xen (Cambridge)
   Denali (UWash)
   Vservers (Intel Berkeley)
   Mgmt VMs (Intel SSL)
   SILK/Scout (Princeton)
   DSlice (Intel Berkeley)
Outline
   PlanetLab overview

   Comparing PlanetLab with Globus

   Some thoughts on carrying out the project (discussion)
Comparing PlanetLab with Globus
   Different assumptions about the environment
     user communities
     applications
     resources

   Different mechanisms for building VOs
     Assumptions: User Communities
 PlanetLab: users are CS researchers who
  experiment with and deploy infrastructure
  services.
 Globus: users come from a more diverse pool of
  scientists who want to run their (end-user)
  applications efficiently.
Implication: functionality offered
[Figure: layered stacks — Globus sits between user applications and
 the OS; PlanetLab sits above the OS and below PL services, which in
 turn support user applications]
         Assumptions: Application
         Characteristics
Different views on geographical resource distribution:
   PlanetLab services: "distribution is a goal" (by design)
       leverage multiple vantage points for network measurements, or
        exploit uncorrelated failures in large sets of components
   Grid applications: "distribution is a fact of life" (pre-existing)
       resource distribution: a result of how the VO was assembled
        (due to administrative constraints)
   Could iVCE accommodate both kinds of applications?
Implication: mechanism design for resource allocation
     Assumptions: Resources
 PlanetLab's mission as a testbed for a new class of
  networked services allows for little HW/SW
  heterogeneity.
 Globus supports a large set of architectures and
  sites with multiple security requirements.
Implications: complexity, development speed
         Assumptions: Resource Ownership
[Figure: spectrum from control at the VO level (PlanetLab) to
 individual site autonomy (Globus)]
Goal: individual sites
  retain control over their resources
 PlanetLab limits the autonomy of
  individual sites in a number of ways:
       VO admins: root access, remote power button
       sites: limited choice of OS and security infrastructure
   Globus imposes fewer limits on site autonomy
       requires fewer privileges (can also run in user space)
PlanetLab emphasizes global coordination over local
   autonomy to a greater degree than Globus
Implications: ease of managing and evolving the testbed
        Building Virtual Organizations

   Individual node/site functionality
   Mechanisms at the aggregate level
       Security infrastructure
            Delegation mechanisms
       Resource allocation and scheduling
       Resource discovery, monitoring, and selection.
  Delegation Mechanisms:
          Identity Delegation
Broker/scheduler usage scenario:
 User A sends a job to a broker service
  which, in turn, submits it to a resource
  (over X.509/SSL, with a delegated identity).
  The resource manager makes
  authorization decisions based on the
  identity that originated the job (A).

Globus: implementation based on delegated X.509 proxies
PlanetLab: none
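The scenario above can be sketched as a minimal identity-delegation check. The credential shape is a loose, unsigned analogy to an X.509 proxy; all names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProxyCredential:
    subject: str        # who holds the proxy (the broker)
    on_behalf_of: str   # identity that originated the request chain

def delegate(user, broker):
    """The user hands the broker a proxy carrying the user's identity
    (loosely analogous to an X.509 proxy; no signatures here)."""
    return ProxyCredential(subject=broker, on_behalf_of=user)

def authorize(cred, acl):
    """The resource manager decides on the ORIGINATING identity,
    not on the broker that actually submitted the job."""
    return cred.on_behalf_of in acl

proxy = delegate("user-A", "broker-1")
allowed = authorize(proxy, acl={"user-A"})   # True: traces back to A
```

The key property is that the authorization decision ignores `subject` and looks only at `on_behalf_of`, which is what distinguishes identity delegation from the capability scheme on the next slide.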
        Delegation Mechanisms:
            Delegating Rights to Use Resources
 Broker/scheduler usage scenario:
     User A acquires capabilities (usage rights) from
      various brokers, then submits the job description
      together with the acquired usage rights.

 GGF/Globus: WS-Agreement protocols
     represent 'contracts' between providers and consumers
     the local enforcement mechanism is not specified
 PlanetLab
     individual node managers hand out capabilities, akin to
      time-limited reservations
     capabilities can be traded
     an extra layer provides secure transfer, prevents double
      spending, and offers an external representation

                               The two approaches can complement each other!
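PlanetLab's capability model above can be sketched as a transferable, time-limited lease. This is a simplified illustration under invented names; the real system adds secure transfer and double-spending prevention.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Lease:
    node: str       # resource the capability refers to
    holder: str     # current owner of the usage right
    expires: float  # absolute deadline in seconds

def issue(node, holder, duration, now):
    """A node manager hands out a time-limited capability."""
    return Lease(node=node, holder=holder, expires=now + duration)

def transfer(lease, new_holder):
    """Capabilities can be traded; the expiry is unchanged."""
    return replace(lease, holder=new_holder)

def valid(lease, holder, now):
    """A lease is usable only by its holder and only before expiry."""
    return lease.holder == holder and now < lease.expires

lease = issue("node-1", "user-A", duration=10.0, now=0.0)
traded = transfer(lease, "broker-1")     # broker now holds the right
```

Unlike identity delegation, the resource here checks who *holds the right*, not who originated the request, which is why the two mechanisms complement each other.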
         Global Resource Allocation and Scheduling
[Figure: both systems layer users, application managers,
 brokers/agents, node managers, and nodes (resources)]
   Globus: identity delegation; sends job descriptions
   PlanetLab: resource usage delegation; sends capabilities (leases)
                        Convergence: Borrowing and Merging —
                        a Large, Dynamic, Self-Configuring iVCE
[Figure: Grids (Globus), strong on functionality and infrastructure,
 and P2P (PlanetLab), strong on scale and volatility, converge toward
 an iVCE offering: large scale; local control and self-organization;
 infrastructures to support diverse applications; diversity in shared
 resources]
Outline
   PlanetLab overview

   Comparing PlanetLab with Globus

   Some thoughts on carrying out the project
Task Statement: Research Topics
   1. Resource-abstraction and execution-abstraction models for the
    virtual computing environment
   2. Reference model for autonomic elements; specifications describing
    their interfaces, relations, and interaction protocols
   3. Runtime structure of the virtual computing environment based on
    virtual executors
   4. Basic service specifications and service management mechanisms of
    the virtual computing environment
   5. Evaluation methods for the virtual computing environment's
    architecture
Two models and one architecture
Contract Assessment Targets
 Publish at least 22 papers, including 1-2 per year
  in international journals or high-level international
  conferences;
 one academic monograph;

 1-2 patent applications or software copyright registrations;

 train at least 10 PhD and master's students
Preliminary Division of Work (draft for discussion)
   Basic considerations
       center on iVCE, building on each member's existing research
       based on the contract, but not limited to it
       models + algorithms
   Two models
       resource (autonomic element) model (Xu)
       virtual executor (Wang, Li D.)
            model; state management and runtime structure
   One architecture (Li D., Chu, Zhang, Li H.)
       iVCE architecture framework (everyone)
            starting point: the Zhongke article
       runtime environment, basic service specifications, and service
        management mechanisms
            aggregation and coordination mechanisms (Xu/Chu, Li D., Zhang, Li H.)
       iVCE computing model (系?)
   Evaluation and testing
Paper Targets for the First Two Years (draft for discussion)
   Contract: publish at least 22 papers, including 1-2 per year in
    international journals or high-level international conferences
   Expected total: more than 30 papers
   Li D. / Xu / Chu: (2*9)
       international journals or high-level international conferences: >= 1 paper/year
       other SCI papers or top-tier domestic journals: >= 2 papers/year
   Zhang / Li H.: (2*3)
       year 1: SCI papers or top-tier domestic journals: >= 1 paper
       year 2: international journals or high-level conferences: >= 1 paper; others: >= 1 paper
   One monograph?; 1-2 patents or software registrations

   Related papers by other faculty and students: >= 10 papers
In author lists, the project PI should rank at least in the top three,
 and at least 1 important paper should have the PI as first author
Weekly Meeting Plan
   Every Thursday at 8:00 pm
   Location: the key laboratory's 973 office (or the first-floor
    meeting room, etc.)
   Ground rules
       share fully
       credit belongs to the originator
       any format welcome
   Topics
       latest advances in related fields
       discussion of individual research and progress
       difficulties and open problems
Growing the Team
   plan on a five-year horizon
   attract more graduate students to participate
       master's students
       PhD students
Thank You!
Open Questions
Questions:
 1. What is the relationship between autonomic elements and resource services?

 2. What is the relationship between virtual executors and services?
				