
NVMe Client and Cloud Requirements, and Security



Transcription of NVMe Client and Cloud Requirements, and Security

NVMe Client and Cloud Requirements, and Security
9:45-10:50, Flash Memory Summit 2017, Santa Clara, CA

- Features needed for SSD deployments at the client: Gwendal Grignou (Software Engineer, Google) and Lee Prewitt (Principal Program Manager, Microsoft)
- Features needed for large scale SSD deployments: Lee Prewitt (Principal Program Manager, Microsoft) and Monish Shah (Hardware Engineer, Google)
- Security Vision and Collaboration with TCG: Jeremy Werner (VP SSD Marketing and Product Planning, Toshiba) and Dave Landsman (Director, Standards Group, Western Digital)

Features needed for SSD deployments at the client
Gwendal Grignou, Software Engineer, Google

The case for NVMe on the client:
- Chromebooks were not storage intensive
- That is changing with Android application support (ARC++)

The case for NVMe on the client (cont.):
- Considering models with large storage
- Storage usage is spiky
- NVMe latencies will bring a better customer experience

Packaging:
- A connector is viable only in some Chromebox designs
- Z-height and board real estate constraints apply on all Chromebook motherboards
- A 16x20 BGA is still too big; a smaller BGA is the right size
- The same MLB can be stuffed with either NVMe or the existing eMMC, providing a good transition from eMMC-based storage to NVMe storage
- eMMC and NVMe together provide good performance, price, feature, and capacity coverage to meet different customers' needs

BGA:
- 2 PCIe lanes at 2 x 8 GT/s: enough for 512 GB of regular flash; revisit with next-gen memory
- SPI pins allow stacking SPI NOR flash

Cost reduction:
- Controller cost has to go down
- Host Memory Buffer (HMB): support in the Linux kernel has been proposed
- Other options
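Since HMB lets a DRAM-less controller borrow host memory, the sizing decision can be sketched as follows. HMPRE and HMMIN are the controller's preferred and minimum HMB sizes from Identify Controller, reported in 4 KiB units; the halving fallback policy and the function name are illustrative assumptions, not the Linux driver's actual algorithm.

```python
# Illustrative sketch of how a host might size a Host Memory Buffer (HMB)
# for a DRAM-less controller. HMPRE/HMMIN are the controller's preferred
# and minimum HMB sizes from Identify Controller, in 4 KiB units.
# The halve-until-it-fits policy below is an assumption for illustration.

PAGE = 4096  # HMPRE/HMMIN unit size in bytes

def choose_hmb_bytes(hmpre_units: int, hmmin_units: int, host_free_bytes: int) -> int:
    """Return an HMB allocation in bytes, or 0 if the host cannot
    satisfy the controller's minimum (run without HMB)."""
    want = hmpre_units * PAGE
    floor = hmmin_units * PAGE
    while want >= floor:
        if want <= host_free_bytes:
            return want
        want //= 2  # assumed policy: halve until it fits or drops below HMMIN
    return 0

# Controller prefers 32 MiB, requires at least 4 MiB; host can spare 16 MiB:
print(choose_hmb_bytes(8192, 1024, host_free_bytes=16 * 1024**2))  # 16777216
```

The point of the sketch is the tradeoff from the slide: HMB trades a little host DRAM for removing controller DRAM, which is one way "controller cost has to go down."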

Sanitize:
- Sanitize improves security
- When transitioning to developer mode: crypto-erase if supported by the namespace, block erase otherwise
- Kernel images, root partitions, and other data live on /dev/nvme0n1; the user partition is /dev/nvme0n2

Conclusion:
- NVMe devices in Chromebooks are around the corner
- The new usage model requires better storage
- Controller fixed cost is the limitation
- Coming to larger capacities first, proposed as a SATA SSD replacement
- Replacing eMMC down to 64 GB is being considered

Client & Mobile Needs for NVMe
Lee Prewitt, Principal Program Manager - SFS

Agenda:
- Why is Client different?
- What NVMe features are required?

Why is the Client different?

Design principles for client hardware:
- Support a broad set of hardware in smaller and smaller form factors: thin-and-light laptops (2-in-1), phone, tablet, others
- Reduce BOM cost through integration: multiple chips with different functions can be eliminated
- Harden the device against security threats: malware attacks, DoS attacks, etc.

- Flexible tradeoff between power usage and performance: power constraints, thermal constraints, thermal events, race to sleep

What NVMe features are required?

NVMe optional features: when are optional features not optional?
- Required by the Windows HLK
- Needed for smooth interop with Windows

Client and mobile features:
- Boot Partitions
- RPMB
- Namespaces
- HMB
- Drive Telemetry
- Power Management
- Write Protect (targeted for )

Data Center Needs for NVMe
Lee Prewitt, Principal Program Manager - SFS
Laura Caulfield, Senior Software Engineer - CSI

Agenda:
- Why is the Data Center different?
- What NVMe features are required?

Why is the Data Center different?

Design principles for cloud hardware:
- Support a broad set of applications on shared hardware: Azure (>600 services), Bing, Exchange, O365, others
- Scale requires vendor neutrality and supply-chain diversity: Azure operates in 38 regions globally, more than any other cloud provider
- Rapid enablement of new NAND generations: new NAND every n months, hours to precondition, hundreds of workloads
- Flexible enough for software to evolve faster than hardware: SSDs are rated for 3-5 years and firmware updates are a heavyweight process, while software is updated daily

What NVMe features are required?

NVMe optional features: when are optional features not optional?

- Required by the Data Center RFP
- Needed to meet data center use cases

Data center features:
- Streams
- Fast Fail
- I/O Determinism
- Drive Telemetry

Cloud Requirements for NVMe: a Google perspective
Monish Shah

Typical cloud application:
- An application running on globally distributed server farms, serving millions to 1B+ users over the Internet
- Goals: minimize Total Cost of Ownership (TCO) and minimize latency

Opportunity #1: I/O Determinism

How I/O Determinism helps:
- To optimize TCO, large SSDs are often shared between multiple applications
- I/O Determinism (IOD) helps control latency by solving the noisy-neighbor problem and giving the host control over latency
- Note: the NVMe Technical Working Group is actively working on an I/O Determinism Technical Proposal

Read/write interference:
- Common element in all NVM technologies: writes take much longer than reads
- Reads are blocked while a write is active, causing long tail latency

Example:

NAND flash example:
- Read latency without blocking: ~100 µs
- Read latency when blocked behind a write: ~2-5 ms
- IOD helps address this

IOD concept: NVM Sets
- NVM Sets provide a mechanism to partition the SSD with performance isolation
- The NVM Set creation mechanism is vendor specific: the host cannot select arbitrary sizes; the vendor decides on a specific set of configurations to support

Example table of NVM Set configs:

  Config #   NVM Sets provided
  0          4 Sets of 1 TB each
  1          8 Sets of 512 GB

- NVM Sets in a config need not be of the same size

NVM Sets: performance isolation
- The degree of performance isolation is an implementation choice
- Recommendation: do not allow writes in one NVM Set to block reads in another NVM Set
- To the extent possible, avoid sharing internal controller resources
- PCIe bandwidth is shared: no isolation is possible there

Predictable Latency Mode:
- Allows the host to control read/write interference within a single NVM Set
- The timeline alternates: DTWIN, NDWIN, DTWIN, NDWIN, DTWIN, ...
- DTWIN = Deterministic Window: no writes to the media
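The example NVM Set configuration table above can be modeled as data the host reads and chooses from. This is a minimal sketch assuming only the two configurations shown; the dictionary layout and the `pick_config` helper are illustrative, not an NVMe-defined structure.

```python
# Minimal sketch of the slide's example NVM Set configuration table.
# The host cannot ask for arbitrary set sizes; it picks one of the
# vendor-provided configurations. Layout and helper are illustrative.

TB = 10**12
GB = 10**9

NVM_SET_CONFIGS = [
    {"config": 0, "num_sets": 4, "set_size_bytes": 1 * TB},    # 4 Sets of 1 TB each
    {"config": 1, "num_sets": 8, "set_size_bytes": 512 * GB},  # 8 Sets of 512 GB
]

def pick_config(min_sets: int) -> dict:
    """Choose the vendor config with the fewest sets that still
    provides at least `min_sets` isolated NVM Sets."""
    candidates = [c for c in NVM_SET_CONFIGS if c["num_sets"] >= min_sets]
    if not candidates:
        raise ValueError("no vendor config provides enough NVM Sets")
    return min(candidates, key=lambda c: c["num_sets"])

# A host that wants to isolate 6 tenants must take config 1 (8 sets of 512 GB):
print(pick_config(6)["config"])  # 1
```

This mirrors the slide's constraint: the host gets performance isolation only at the granularity the vendor exposes, so capacity planning means choosing among fixed configurations rather than carving arbitrary sets.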

- No GC or other maintenance; minimal writes from the host; minimal tail latency on reads
- NDWIN = Non-Deterministic Window: writes allowed. The host can write, and GC and other maintenance are allowed. Return to DTWIN when done.

An enhancement to NVM Sets, for more advanced applications: Read Recovery Level (RRL)
- RRL: the host can trade off UBER for latency
- Example levels: a vendor-default level (normal recovery effort) and a Fast Fail level (minimal recovery effort), which is mandatory

Opportunity #2: optimizing memory TCO

3D NVMs: skirting the end of Moore's Law
- DRAM: planar technology with no prospect for monolithic 3D scaling; limited prospects for cost and capacity scaling
- NAND and new NVMs: 3D already proven for NAND, expected for PCM and ReRAM; reasonable prospects for scaling in the foreseeable future

Semiconductor geometry scaling is reaching its limit.

However, the impact differs across technologies.

Strategy: use NVMs to supplement DRAM
- DRAM will have the best performance; use NVMs for cold data

Implementation: reinvent paging
- Cold data (4 KB pages) is paged from DRAM to a swap device

Choosing swap media:

  Media      Latency
  HDD        ~10 ms
  SSD        ~100 µs
  New NVM    ~10 µs

Google experimental results:
- Promising results with a 10 µs swap device: negligible application performance hit when paging cold data
- Choice of media: PCM, ReRAM, low-latency SLC NAND
- NVMe optimization: use Controller Memory Buffers (CMB)

Summary: opportunities for next-gen NVMe
- I/O Determinism: control latency at constant TCO
- Paging: reduce DRAM TCO at constant performance

NVMe, TCG, and Security Solutions for the NVMe Ecosystem
Dave Landsman (Western Digital), Jeremy Werner (Toshiba), David Black (Dell EMC)

Agenda for today:
- NVMe and TCG are working together on NVMe security
- New features developed in the Opal family
- Discussing enabling enterprise capabilities using the same core spec as the Opal family
- What threats are we trying to address?

- (and what are we not trying to address?)
- What's being implemented in NVMe and TCG specs?
- What's next?

The NVMe device ecosystem has become broad:
- NVMe in the datacenter to NVMe-oF, including non-PCIe fabrics
- NVMe-MI for out-of-band management

A broad ecosystem means security considerations:
- Authentication / access control
- Sanitize
- Data-at-rest encryption
- Device locking
- Media write protection
- Data-in-flight encryption
- End-to-end (E2E) cryptographic integrity checks

Threat classes and mitigation strategies:
- Data access: theft or unauthorized disclosure of data; malicious or criminal change or destruction of data
- Physical device access: device lost or stolen; repurposing a device
- Pyrite SSC: Block SID authentication
- Opalite SSC: PSID, Block SID authentication

TCG/NVMe work to date:
- Core spec sets: Opal SSC
- Optional feature sets
- Storage Interface Interactions Spec
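The SSC feature lists above can be expressed as a small lookup, useful when deciding which SSC covers a required capability. This is an illustrative sketch: the feature sets come from the slide content, while the dictionary layout and helper name are assumptions, not structures from the TCG or NVMe specs.

```python
# Feature sets of the two TCG SSCs named in the slides (Pyrite, Opalite).
# Data layout and helper are illustrative; feature lists are from the
# slide content above.

SSC_FEATURES = {
    "Pyrite":  {"Block SID Auth"},
    "Opalite": {"PSID", "Block SID Auth"},
}

def sscs_with(feature: str) -> list[str]:
    """Return the SSCs (sorted by name) that include a given feature."""
    return sorted(name for name, feats in SSC_FEATURES.items() if feature in feats)

print(sscs_with("Block SID Auth"))  # ['Opalite', 'Pyrite']
print(sscs_with("PSID"))            # ['Opalite']
```

In this toy model, a platform that needs PSID revert (for repurposing a lost or returned device) would be steered toward Opalite, while Block SID authentication alone is covered by either SSC.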

