Optimize GCP’s Security Architecture for Maximum Protection (Cloud Next ’19)

August 16, 2019


[MUSIC PLAYING] SHAHED LATIF: Welcome. Today, we’re going to talk
about optimizing GCP security architecture for
maximum protection. I just want to share with
you the agenda we’re going to cover over the next 50 minutes. We’re going to talk about
key risks and threats from an enterprise
perspective just to give you a perspective
of what we feel is facing our clients today. We’re going to share
with you our view on what we think is a Google
Cloud security architecture model. And we’ll share with you
three dimensions to that. And then we thought it’d
be interesting to have a fireside chat. We’ve invited someone from
Google to join my colleague and me to go through a fireside chat to try and guess the questions
that are in your head today. Then we’re going to end on
a key takeaway slide, some of the key lessons that we want
to bring to today’s session. So with that,
today’s presenters– my name is Shahed Latif. I’m a partner in
PwC’s cloud security practice focused on Google. Joining me today will be Gokul, and Rob from Google. Fun fact about me– I’ve done an 80-mile
backpack in New Mexico, carrying 50 pounds over
10 days, very tough. I crossed three large mountains. And it really tested my stamina. I enjoyed it a lot. But Gokul, why don’t
you introduce yourself? GOKUL RAGHURAMAN: Sure. Good afternoon, everyone. I’m Gokul Raghuraman. I’m a director in our PwC’s
advisory group focused on cloud transformation
and cloud security. So fun fact– I always wanted to
skydive but never really had the guts to do it. So I thought, why don’t I
do it in a safe environment? And I picked this one. It turned out really fun. I really wanted
to go back again, but that’s something
that I wanted to do. Thanks. SHAHED LATIF: So let’s talk
about key risks and threats to the cloud. We thought we’d start
with a bit of a privacy and regulatory feel to it. And what we’ve done at PwC
is we’ve analyzed the market. We’ve looked at almost every
single country in the world. And we’ve identified
a number of drivers, environmental drivers,
international law, US law and regulations, enforcement and
case law, and self-regulation and standards. And you can see some of
the examples on here. And there’s many, many more. What we find with our
clients, when we talk to them, they really don’t
have enough insights because of the global nature of
the complexity of regulation. And sometimes it’s
important to understand it. So we have developed
a point of view in almost all these countries
and have it in a repository at our fingertips. We don’t need to spend hours
and weeks researching it. We have it at our fingertips. And we think that’s
an important attribute to begin the cloud journey
to really understand that, especially when you talk
about data localization and where data might
physically reside. So what’s the value,
though, of understanding all that regulation? What’s the impact if
you have non-compliance? We think there are four broad
areas of non-compliance risks. The first is regulatory fines
and penalties, which I’m sure most of you are aware of. But you might be
subject to an FTC order. There’s been a number of
consumer breaches that have occurred that have
resulted in decades long or multiple decades
of orders where you have to do audits for the
next 20 years in some cases. Some business operations
might be limited. So it could have a big impact
from a regulatory perspective. Financial risk– you could
have loss of revenue. You could have litigation
costs, which is often the case in some breaches,
civil and criminal, and penalties for
data breaches as well. There’s also a
reputational risk. And that’s a big discussion. There’s so many breaches
today that have occurred. Some people have
been numb to it. Share prices don’t
really get impacted much. But does your
reputation get impacted? In some cases, it could,
especially if you’re heavily relying on an online
presence or having to depend on technology where
security is really paramount. You might even lose lots of
confidence in your employees and start to get turnover. And then, finally,
operational risk– restricting operations,
transferring data in an appropriate way
or having an incident response program, having remediation obligations to that incident response program that you didn’t meet. So these are some of the non-compliance risks that you have to be careful of. I want to turn quickly to
some key considerations. We tried to summarize this in a
page, which is very difficult. But we think these are the main
things that we keep seeing over and over again at our clients. Governance and keeping pace– often, cybersecurity
does not always have a seat at the table. When it comes to
architectural discussions, IT may be at the front, or
business may be at the front. Rarely do I see cyber at a seat at the table. And with Google Cloud
adoption, you have to be. Collaborating on that solution
together to enhance enablement of agility through a
DevOp environment– and we’ll talk later about
our thoughts around secure development– is really important to
bring that mentality that cyber is an equal
partner in the discussion. And modernizing the
employee skill sets– time and time again,
we go to our clients, and we find there’s not
enough resources that know their subject. And we get called upon to
either augment the team or bring our specialized
skill set to the table. Identity access
management– you probably have attended some of the
sessions over the last two days around this topic. But when you’re in a
multi-cloud environment, it’s quite complex to deal with. And throw on top of that
legacy applications as well. So centralizing the identity,
both the user and device, using MFA and PAM, are three
really important topics to address. Data protection and compliance– I mentioned earlier all the
regulations that’s out there. Especially here, since
we’re all here in California today, the CCP Act is
having a big impact on a lot of my clients,
especially those with a lot of consumer data. They’re having to
understand now where is data, who has access to
it, and how do they use it. And if I have to delete
it, how can I delete it? So it’s really important to have
an effective program around it. Legacy services– what’s
the path for data migration from legacy apps to a workload
that’s going into the cloud? How do you deal with
data in transit? How do you encrypt
it the right way? And then preservation response– do you understand the
roles and responsibilities of the cloud provider? You might know that upfront
during early discussions. But as you get into month three,
month six, the second year, are you really clear on what
happens if there’s a breach? Who’s responsible for what? What are the SLAs around that
is really important to know. I want to quickly
talk now, jump into, something we’ve created
at PwC that’s based on three basic principles. We looked hard. And we have a concept called strategy through execution when we deliver our
solutions to a client. So when it came to
Google Cloud, we said we wanted to approach
it in three dimensions– a technical architecture,
which is probably a lot of what you saw in this conference. But we also feel strongly about a process architecture and a controls architecture. All three have to come
together and have to be linked. And that’s what our view
is around, bringing it all together into one place. So I’m going to share with
you the next few slides how we’ve approached this topic. The first one is controls,
which is relatively easy. We’ve adopted NIST 800-53. We think that’s the most
comprehensive framework out there, especially in the US. It’s the most stringent
baseline to go after. We’ve coupled it,
obviously, with CIS version 7, because there’s a lot of the
technical hardening standards that you would have. We mapped them both together,
created specific controls for GCP. And we’ve identified
which ones are really critical and which are non-critical. And we also noted which ones are technically automated, or possible through automation, which ones are really process or workflow oriented, and which ones would rely on people to follow.
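To make that mapping concrete, here is a minimal, purely illustrative sketch in Python of the kind of catalog entry we mean; the control ID, references, and values are hypothetical, not PwC’s actual catalog.

    # Purely illustrative control-catalog entry; the ID and mappings are hypothetical.
    control = {
        "id": "GCP-IAM-001",
        "title": "Enforce MFA for privileged users",
        "nist_800_53": ["IA-2(1)"],       # mapped NIST 800-53 control(s)
        "cis_v7": ["4.5"],                # mapped CIS v7 sub-control(s)
        "criticality": "critical",        # critical / non-critical
        "enforcement": "automated",       # automated / process / people
        "gcp_native": True,               # can it be met with native GCP services?
    }

Each control in the baseline carries those attributes, which is what lets you measure adoption against it later.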
We also took it
one step further. And we worked out which ones
are really native to within GCP and which ones are not. Often, our clients want to
maximize as much as possible from Google Cloud. And what we’ve done is try
to identify that early up. So having this
framework, we have a baseline of which to
measure our adoption against. So we think this is
one important attribute to have it defined. And you can see the
categories below. I won’t go through them all. You can see them in
present in front of you. But this illustrates the
dimensions of controls that you need to address. I want to spend a few
minutes on this slide, because this is our
technical architecture. Underneath this, we have
content that talks about what good looks like from an
architectural perspective. We don’t want to change
too much of what’s already in the on-premise world. But what we’ve done
is translate it. What does it mean to
being in the cloud? So let me walk you through
how we addressed this technical architecture. First, we use the NIST CSF domains: identify, protect, detect,
respond, and recover. We feel that most of our
clients, especially the board level, understand
that terminology. So we use that lens to
translate at an executive level. Here, there are different
campaigns and programs that we’re trying to
do using those domains. How do we measure success? We measure success
through that lens. We also want to take into
account the technology stack of the cloud environment. So there’s compute, network,
storage, logging and monitoring, IAM. We look at it from that
lens in making sure we address those components. At the top, we also want to be
aware of the security risks. So these are inputs to each
of our technical requirements that we defined– visibility, business awareness,
compliance, data security, threat protection, cloud
IAM, incident response, vendor landscape, cloud DevOps, vendor management, cloud API security, and
infrastructure security. We think it’s important
that our model fully addresses those risks,
and they’re embedded in our technical architecture. So I’ll walk you through
them, and then I’m going to ask Gokul to share
some client experiences. So for example, IT
asset management– at a high level, everyone knows
that inventory is important. How do you do that
in the cloud when assets don’t last
more than maybe a few hours or a few weeks? How do I get around that? And if I have to go
back and investigate, how do I deal with that? Threat and vulnerability
management– clients are struggling
sometimes in this space. How do you do a
penetration test in GCP? And how do you do this
on a regular basis? What tools are available
for you to allow for that? Data protection–
how do we make sure that the data that’s
there in the cloud is protected according
to your policies? Logging and monitoring, it’s
really important to define it. Most of my clients have a
multi-cloud environment. What’s the single
source of truth when it comes to the repository? How do we create
a data lake that will give us the
visibility that we need, and in a real-time fashion? Incident response–
so if you ever do have a breach
of whatever nature, how do you respond quickly? Business continuity
and disaster recovery– how do we make sure that
you can recover quickly? We, ourselves, are
actually in the middle of researching some new ways
of looking at resiliency. So we’re looking at
actually leveraging some of Google tech, right? And how do we quickly
assess dependencies on different assets
that are in the cloud? Governance and compliance–
an important point of governance is who
owns cloud architecture. Who approves cloud architecture? When it comes to
service level approvals, where do you get that defined? And now, you get a blurring
between operations, security, and development. How do you bring
all that together in a defined way that
doesn’t bring down your levels of
security controls? Security development
lifecycle– you may be surprised to hear
that companies are struggling with adopting an SDLC that’s
applicable to a cloud-like environment. Risk assurance and
management– how do you assure your customers,
your third parties, that you got a safe environment? IAM, Identity and
Access Management– which I’ve talked about already. But the central theme is how you
control access into the cloud. Awareness and training and,
finally, network security– so I just wanted to spend
a little bit of time walking through each one. But I’m going to
ask Gokul now just to share some examples of
some of our client experiences as we’ve used this framework. GOKUL RAGHURAMAN: Sure. Sure, absolutely. So I’m going to take a
couple of domains here. And then we’ll talk about
specific customer examples where we’ve been asked
to come in and help, and that’s resulted
in our developing this kind of a framework to
help out in future engagements. So one of the first things
that we typically look at is what does a customer have
in their cloud environment. We recently were brought
in for a client who wanted to do a very quick
assessment on their environment and look at the readiness
of their environment. The first thing that
we ask them is, OK, what are you running
on the cloud? What are your assets? What are your resources? A lot of our clients have
an inventory available which they can use. But a lot of the times,
that’s not accurate, right? And in the cloud, with the
scalability and the elasticity to it, there are resources
going up and down all the time. So how are you going to have
a framework and a process in place to be able
to measure and track that over a consistent
period of time, right? So I think the key
on the Google side is GCP has already provided
some native capabilities, like labeling. They have the Security
Command Center that’s now integrated
with their asset lifecycle that you can leverage to enable
tagging on your resources, identifying what resources you have. And you build up a very simple schematic model, which says: here is my resource. Here is its use. Who is the owner? What is the environment? What is the purpose of that? And you have some level of
data classification associated to that. So once you have
established that model, that then becomes your baseline. And you start slowly building
automation on top of that. So once you have
that automation, it’s very easy to start
building policies on top of that and you start enforcing
those policies. So the first thing
that we typically ask the customer is, what
is your asset inventory? And I would recommend looking at your policies and your resource management lifecycle to set up those labeling policies in place.
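As a rough illustration of that schematic model, here is a minimal Python sketch that attaches such a schema as labels to a Cloud Storage bucket using the google-cloud-storage client; the project ID, bucket name, and label values are hypothetical.

    from google.cloud import storage

    client = storage.Client(project="my-project")   # hypothetical project ID
    bucket = client.get_bucket("my-app-data")        # hypothetical bucket name

    # Attach a simple ownership/classification schema as labels.
    bucket.labels = {
        "owner": "payments-team",
        "environment": "prod",
        "purpose": "transaction-archive",
        "data-classification": "confidential",
    }
    bucket.patch()

    # An inventory job can later filter resources on those labels.
    for b in client.list_buckets():
        if b.labels.get("environment") == "prod":
            print(b.name, b.labels.get("owner"))

The same labeling idea extends to Compute Engine instances, BigQuery datasets, and projects, which is what makes label-driven inventory and policy enforcement possible.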
Another example
that we commonly run into when we walk into a
customer is traditionally you have these engineering
teams, the security team. Then you have the
app development team playing separate roles, right? As you go into the cloud with
all your automation in place, those lines have started
blurring all the more now, right? So you need a proper
methodology and a process in place to be able
to say, how am I going to automate my infrastructure? How am I going to provision
my infrastructure? How do I embed security as part
of my development lifecycle, right? So somebody is
running containers, or somebody else is having
traditional workloads. How do I ensure a
consistent process that is baked in as part of
my build and release pipeline? So that’s another thing
that we commonly notice. And we recommend coming up with
a proper process in place that works not only in your on-prem environment, but also in your
cloud environment. And typically, once you have
established the process, that can be tailored very
quickly to any of the cloud providers that you have. So that’s a couple of
examples that I had in mind. So at this point
of time, I would like to invite Rob
from Google to join us for our quick fireside chat. ROB SADOWSKI: Thanks, Gokul. And thanks, Shahed. So my name’s Rob Sadowski. I work in the product
marketing organization here at Google Cloud
on security and trust. And my team really
develops a lot of the initial message when
we engage with customers around security,
helping to educate them about what are the
capabilities that are inherent in our platforms,
what are the different control objectives that we
can help them meet. How can we help them with
key security processes, like security monitoring,
like IAM, like, you know, the core disciplines of
any security program? And I guess in the spirit
of sharing fun facts, one of the things that I
like to do is see live music, see concerts. That concert was
great last night. Hopefully, everyone got
to catch some of that. I’ve seen a number of my
favorite bands hundreds of times. So that’s what I like to do
if I’m not doing security. So, cool. SHAHED LATIF: So I think
now, Rob, what we’ll do is turn to a fireside chat. We tried to think
about questions that people in the audience
might be thinking of. So Rob, why don’t you start,
and we’ll see where we go. ROB SADOWSKI: Yeah. I mean, I think the
interesting thing is that you have gone
through these implementations with many different
types of clients. And so early on in
those engagements, just when they’re
considering cloud or whether they’re
considering GCP, I mean, you talked
about technology. You talked about controls. You talked about process. Where are the biggest
challenges early on in the engagement or early
on in the process for people who may be either
starting that journey or who are very early
on in considering this? SHAHED LATIF: It’s actually
an interesting question, because often we get the
inside look before even Google gets to the door. So with my largest clients,
we get asked all the time, is this the right product? Is this the right solution? So before even a cloud provider
gets to the table, we’re asked, does it meet the
security requirements? And often, the ROI
is quickly addressed. That’s not an issue, because
that’s why we’re at the table. Is this the right
transformation process? Absolutely. But the fundamental
question that’s always the biggest laggard, is
it meeting my security policies? And often, what we find
is the client’s security policies are written in a
way that doesn’t translate to a Google-like environment. And so we’re having to translate
their requirements and then what they’re trying to do to
what the platform is going to bring them. And in some cases, yes,
they might well be there. But 9 times out of 10, there’s
a translation required. And so what we have been asked
to do, more often than not, is go in and try and
understand that complexity and bring simplicity. Because that’s the other
side to the equation, bring the simplicity
and understand what the risks and
the vulnerabilities are, but, more importantly,
the strengths that it brings as well. ROB SADOWSKI: Right. I mean, you talked about
basing your work on frameworks, like the NIST framework,
whether it’s the CSF or some of the control matrices. I mean, do you see people
basing their own policies around domains like that, or
do they tend to be looser? SHAHED LATIF: So it has varied. So I can give you a story
of a client, Fortune 15. So keep it within top 15,
I won’t tell you which one. I showed up three years
ago at this client. They created their
policies on input from legal, input
from operations, but proprietary built, was
built off their own thinking. They asked me to come in and
evaluate not that but more their posture overall. And we concluded one of the
most important things to do was to actually base it
on a recognized framework. There we used NIST CSF. And more often than
not, we’re finding NIST being the most common
framework to be used. ISO’s still relevant,
especially from a global side. But the NIST is the
most relevant one. ROB SADOWSKI: Yeah. And I think that’s one of the
reasons why, at Google Cloud, we try to do or we commit
to getting certifications among the most popular
international frameworks. So if someone is basing
their set of controls and how they meet policy
obligations based on something like ISO 27001 and then how
that gets translated through, there is often a direct
mapping that they can use to help them
understand within the confines of a shared responsibility model
what we’re going to be doing, what they’re going to be doing. And then we can also map
out to other controls there. SHAHED LATIF: Two other things
that I want to mention there. The other challenges with our
clients in the early adoption is, do I have the right
technical competency to it? Again, a big percentage
of our clients are building those
resources out. But they want to retool
their current staff to do it, which means they need
a leader and a change agent within the ranks to
be able to pull it through. And more often than not, they rely on a third party to fill in that gap– not always, but sometimes. But eventually,
they have to find that director or that
leader, whoever it is, to really drive
it to completion. ROB SADOWSKI: Do you find
that as organizations look to do some of this that
they are looking to actually bring existing controls? Like, they have a
control objective. But are they trying to
bring existing controls? Or are they relooking at
what is offered in the cloud? And I think we’re going
to talk about some of this to say, OK, we can
replace this existing control that we have in place. And it may be based on
a particular product of how they meet that obligation
with something cloud native. SHAHED LATIF: I
think it’s mentioned throughout this
conference, transformation. They want to automate. Automation is the
key to their success. They don’t want a
compliance audit. They don’t want controls. They want to automate,
which is a lot of what we’re going to talk about later. They want to automate the
process that’s involved in managing their environment. They don’t want
to have hands on. They don’t want people involved. They want to have machines
really do it for them. And some of the companies that
we’re dealing with, at the size that they are, they don’t have a choice, given the scale and the management
they have to deal with. Now, it sounds good. Is that possible today? I think it is. Is it done right now? No. Everyone’s trying to
get to that level. ROB SADOWSKI: Right. OK. SHAHED LATIF: So maybe I
can ask you a question, Rob, just in return. We’ve had a lot of
exciting announcements throughout the week. I’ve talked about some
of the challenges. But what are some
of the things that have taken your fancy
from what you’ve seen this week that really will
help clients out there today? ROB SADOWSKI: Well, it’s
a great segue, Shahed. Because I think you talked
about transformation and modernization of security
controls and capabilities. I mean, obviously, what
we announced with Anthos, where there is a lot
of security capability that is built in there– whether
it’s functionality like what’s in Istio. So you can create and
monitor microservices that you can ensure
that you have encryption between those
different services, that you can automate
configurations. There is a capability like
configuration management where you are able to
define certain policies, instantiate those
policies, and make sure those are
pushed out at scale and make sure things
don’t deviate. It’s critically, I think,
important especially as organizations begin to
scale to make sure that they’re adhering to policies,
that they’re following good governance and so forth. So I think that was one. I think the second– Gokul, you mentioned Cloud
Security Command Center. And I think that
visibility into risk is something that many
organizations really, really have been focused on. As they move into the
cloud, how do I really understand the risk
profile of the assets that are there– so the fact
that now in this service you get a full organization-wide
view of all your assets and then can begin to look
at the risk profile of each of those different
assets and kind of some of the other capabilities
where we brought on something like
security health analytics where we can say,
for those assets, are they configured in such
a way that is creating an inordinate level of risk? Like, do we have storage buckets
with broad permissions that may be open to the internet? Do we have overly
permissive firewall rules? Do we have things that, again, would put me in a situation where I would not be able to meet the objective of a given policy?
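As a hedged sketch of the kind of check being described (not the product’s implementation), here is one way to scan for publicly shared buckets yourself with the google-cloud-storage Python client; the project ID is hypothetical.

    from google.cloud import storage

    client = storage.Client(project="my-project")    # hypothetical project ID
    PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

    for bucket in client.list_buckets():
        policy = bucket.get_iam_policy(requested_policy_version=3)
        for binding in policy.bindings:
            public = PUBLIC_MEMBERS & set(binding["members"])
            if public:
                print(f"PUBLIC: {bucket.name} grants {binding['role']} to {sorted(public)}")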
And I think back to
the automation point, something we talked about
in terms of event threat detection. How can we help automate
or at least provide some of the labor in
the detection process? So being able to take the logs– and I think we’re going
to talk about logging and monitoring
here in a minute– but being able to take that
and actually use intelligence that we’ve gathered
to give you a sense of are there particular malicious
events or signs of compromise that you need to look for. And I think the final one– or a couple of others– is around identity. I think that in terms of good
governance and implementing policies, identity is still one
of the biggest control points that we have in cloud
where we may not be able to implement
some of the controls that we would have wanted to in terms of the network, because services are moving up the stack such that we’re looking less and less at layer
3 and 4, and more and more at layer 7. How can we really
get that level of control in the environment? And knowing who’s
in the environment, being able to manage
that access appropriately throughout its lifecycle,
right at the beginning of that, it’s authentication. So the fact that we
introduced the ability to use the Android phone as a security key, establishing that root of trust for
identity at the beginning, I think is really,
really important. So those are a few
that stood out, especially within the context
of what we’re doing here in terms of being
able to implement strong controls and strong
governance across the platform and really see how
effective those are. SHAHED LATIF: Yeah. One thing that stood out to me
in addition to what you’ve just said is the training aspect. I’m glad to see there’s a
security certification out now. Because I think that’s
absolutely needed out there. ROB SADOWSKI: Yeah. I mean, if folks
didn’t see that, basically the week before we
introduced a specific security certification for GCP. And there’s a set of coursework
that you can go and get that specialization. And a lot of that is driven
by demands that we heard. We want to have more education
about the different security capabilities and how
they might be applied. And so that’s a really
strong, I think, start as you begin to delve
more deeply into these issues. SHAHED LATIF: Great. ROB SADOWSKI: So
going back to you, Shahed and Gokul,
you know, you’ve gotten to the part of
advisory where people have decided to get into cloud. And they are saying, OK,
we have this workload. We have this type of data. What are they actually
looking to get in terms of payoff, right? Is it security? I mean, there’s
this belief I think that has evolved over time
that moving to the cloud can actually potentially
increase your security posture or allow you to implement
stronger or better governance. Is that part of it, or is
it still something else? SHAHED LATIF: So there
are a number of drivers. And I think it varies
according to the industry you’re in and also
the type of company you are in terms of your
style of management. But more often than not, ROI is one
of the first drivers out there. And often, it’s a data
center rationalization issue. So my clients have recognized
maintaining a data center today is really
expensive, not to mention they’ve got to make it secure. So that drives them
to a conclusion, I got to move more
things into the cloud. And then the question becomes,
what workloads can I put there? Over the years,
it’s been very slow. But I worked with a fintech
client just recently. They moved their
entire production environment, the entire platform,
into Google Cloud and went live. And so those things
are changing. But there’s still
a bigger majority of clients that have
not yet adopted cloud. So I think there’s huge
growth potential still. So absolutely, ROI, but the
other thing is transformation. They’re eager to
leverage some of the ML and the artificial intelligence
capability within compute to really harness
business transformation. That, even more than agility,
is driving some of the adoption. And those use cases are
driving the transformation which is driving our
business as well. ROB SADOWSKI: So let me push
on that point a little bit. I mean, do you see clients who
feel that moving to the cloud either enhances their
security posture or allows them to have stronger,
more effective governance? SHAHED LATIF: Personal view,
rather than a firm view, just because I don’t want to
say too much on the firm’s behalf, but I personally believe
that from my experience– and I’ve been in security
consulting over 30 years– that going to a reputable
cloud solution such as Google is absolutely more
beneficial than trying to build it yourself. And when you think about the
complexities of the emerging tech that’s about
to hit us, you need to be with a provider that
can give you that capability. ROB SADOWSKI: OK. SHAHED LATIF: So, Rob,
how does Google initially, at the beginning when you’re
adopting Google Cloud, how do you help in
those initial stages in creating those
business cases? And how do you help to meet
their internal requirements, especially on the
first few months when they’re getting off the ground? ROB SADOWSKI: Yeah. I mean, I think that’s
a nice segue, right? Because one of the things
that we try to do is show that level of security
enhancement or security benefit. So you know, typically,
what would happen? Very early on, we
understand what are the different
types of services, or what is the workload,
or what is the application. And from there, we
understand a little bit more about the business process. We understand a little bit
more about the type of data. And then what we want to
do, back to kind of what you are saying and why I
was asking about that, is understand what are
the security policies that are around that data. And where it gets
challenging, in some cases, is if the policy isn’t
particularly well-defined or if there aren’t stated
control objectives. Because what we like to then do
is be able to give an overview or give a sense
of, OK, here’s how you can use the features in
the platform to protect this. Here’s how you can
meet your obligations from an internal policy
or also from potentially an external regulatory
perspective and the ways that we can help, right? And that’s when we talk about
the relevant certifications or the relevant documentation
we have around some of those different things. So we really like to have the
security conversation upfront, because we know that it becomes
harder and harder to drive change if there are
questions about risk, if there are questions about
being able to meet obligations and guidance and so forth. So I think that that’s
a big part of it. Another thing that we like to
do early on in the engagement is to even go far
as to do workshops. And that may be with partners
like yourselves or internally with our folks where, again,
we look at internal policies. We interview stakeholders. We understand
security objectives. We do some assessment
of existing controls. Because sometimes
the as is environment doesn’t match the
desired future state. And so we can do a
little bit of solution and designing ahead
of time, and then kind of come up with
a roadmap as to some of the things that are
must implement and then phase for other pieces. So that’s what we
try to do early on. But, clearly, security
has to be an integrated part of that design decision. I mean, just like the
philosophy of when we are doing CI/CD of
shifting security left, you’re shifting security left
in the engagement itself. So we are planning
and not saying, oh, now we have this workload. Wait, you know, how are we going
to meet the obligations that we have? SHAHED LATIF: Who is the
most engaging party when you go to a client, which
function– security, IT, business– do you feel is closest to the top? ROB SADOWSKI: It has
to be all the above. But I mean, I think that what
we see the most is the security function is seen as the
team or the center that is going to sign off on
a lot of what happens. So that’s where a lot
of the gravity is. But, again, in doing interviews
and understanding that, we have to understand
from the business side and from other parts who
have influence over it. Because everyone
needs to be bought in. SHAHED LATIF: I don’t know how
familiar you are with the three-day discovery programs I just recently found out about, where you actually spend three days with a client and do workshops and
educate them around security. I don’t know if you want to
talk a little bit about that. ROB SADOWSKI: Yeah. I mean, again, it’s a kind
of workshopping process where we can do some
education about the controls and contextualize it in terms
of the existing business. SHAHED LATIF: OK. ROB SADOWSKI: Let’s see. So what else did I
want to get to here? So, again, getting more
to the specific controls and technologies, what
are the biggest challenges in terms of the technical side? What is specific security
controls or pieces of functionality are they
looking to implement and are they challenged to implement? SHAHED LATIF: So DLP was one. And that just
recently got embedded into the roadmap within there. But, again, one of my
clients was challenged. It wasn’t available at
the time, but now it is. But DLP was one challenge. And that’s really important when you deal with privacy matters.
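For context, the Cloud DLP API exposes content inspection directly; the following is a minimal, hedged Python sketch using the google-cloud-dlp client to look for a couple of common info types. The project ID and sample text are made up.

    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project"                    # hypothetical project ID

    response = dlp.inspect_content(
        request={
            "parent": parent,
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"},
                               {"name": "CREDIT_CARD_NUMBER"}],
                "min_likelihood": "POSSIBLE",
            },
            "item": {"value": "Contact jane@example.com, card 4111-1111-1111-1111"},
        }
    )
    for finding in response.result.findings:
        print(finding.info_type.name, finding.likelihood)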
SIEM and UBA– so with insider
threat and for detection, UBA is really important,
User Behavior Analytics. And how do you use an
existing SIEM platform? Using Stackdriver or the
Security Command Center, how do I use that with
their existing platform that they might have? Container security–
and obviously, Anthos is a great example of
bringing a multi-cloud feature to it. But managing your operations
around the new technology that brings will be interesting. Endpoint detection
and response, that’s a challenge for most
clients even on on-premise. How do you do it in a
virtual environment? And how do you get real-time
indications of what’s going on in that space? Firewall rule processes–
changes made to firewalls when you’re in a
developer environment are made very quickly. But it may not be done right. One of my other clients
that I worked with, they changed their
firewalls really quickly to the point
where it created problems. Because they’re in a dev
environment, making changes quickly. So how do you create
controls around changes? Which sounds obvious, but I’m
talking about a live example that that wasn’t the case. And then configuration
management, that’s the other one. With so many
configurations, the ability to make so many changes,
how do you manage that? Can that be scripted? Can we control that? ROB SADOWSKI: Yeah. I mean, I think that
clearly the rapidity, the ease of making changes– which is great, because it
allows you to be responsive. But it also can cause
configuration drift and other things like that. So we try to help with that. And that’s the
technology side thing, and it’s also a
process side thing. But it’s certainly not
something to be underestimated. And I think we see that as well. SHAHED LATIF: So maybe, what are
the native solutions within GCP that you think can address
some of the things I just brought up? ROB SADOWSKI: Yeah. I mean, clearly we
have a lot of controls that can address many of those
different types of things. You talk about behavioral
analysis and things like that. One of the things we
talked about this week was our policy intelligence
tool that we’re bringing out. And so one of the
things that we’re doing there is looking
at the set of IAM roles that have been defined and then looking at access that’s happening and being able to say, is there a mismatch? So even if a role is
defined correctly, we can say this group
of users have not accessed this resource. Should we remove
this permission– so getting to more
effective governance based on user behavior and
looking at that over time. I think, as you mentioned
container security, there are a number of
capabilities we talked about. We talked this week about what
we are doing around container registry vulnerability
scanning, so being able to scan
for vulnerabilities in the development process
and provide attestations of whether particular components
or other underlying things have vulnerabilities in them. And then very much related to
that is a tool that is now GA called binary authorization–
so, in that process, making sure that code is
signed and checked through based on a variety of
criteria before it actually goes to the deploy state. So this whole idea of giving
you tooling to control and get oversight over your
software supply chain is critically important
as organizations are going to transform
and modernize some of these dev processes. And those are two
capabilities that are kind of built in there. SHAHED LATIF: So let’s
take a pause for a minute. Because Gokul’s been
quiet on the end, and I want to bring him in. So I want to deep dive a little bit more into identity and access
management, because I think it’s a core
capability that needs to be really thought through. Maybe, Gokul, I’m
going to share a slide that you can maybe illustrate. Tell us a little
bit more about what you need to have
an effective IAM initiative. GOKUL RAGHURAMAN: Sure. So IAM is one of the core tenets
when we go into a customer site and they want to build
out their cloud transformation program. IAM is one of the core
foundational capabilities that we start with. So if you don’t get the IAM and
the resource hierarchy right, then it causes a
bigger challenge as you start scaling
up your business. So some of the basic
tenets, as we look at IAM, is how do you organize
your resources in Google? What are the different
capability services that are available out of the box? What considerations do you
need to be thinking about as you organize your projects? And your folder structure
and your resource structure in your Google environment
is the first thing that we look at. So what do I mean by that? Google has a resource
hierarchy which starts with
organization resource, and then there’s folders. And then there’s projects. And then there’s
underlying resources to compute your network,
all those resources. So typically, when we
go in at the root level, organization is the node that
customers typically start with, right? And if you have a G Suite
account or a Cloud Identity account, that organization
resource is available to you. The advantage of
that resource is you can now start
tying your policies and your controls at the
organizational level. In some cases, it’s
actually beneficial. You don’t have to duplicate
them across multiple projects. So that’s the root
level that you get. And typically, once you start
at the organization level, any projects that you create are
then automatically associated to that domain and then,
as part of that, associated to your organization. So if somebody leaves
your organization, you already have that
associated to your organization. It follows the
organization and not the individual who’s actually
leaving the organization. So from an administrative
perspective, it’s centralized management. It allows you to manage your
resources and your controls and the policies in
a centralized manner. So that’s why having that
organizational resource helps you set the precedents. So then we have this concept
of folder resources. So a lot of times
our customers want to segregate their projects
by functional units, by departments, by environment. So that’s where the concept
of folders is very helpful. So I might have a
marketing department. I might have another
legal department. I want specific policies
associated to that. I want projects which
have specific controls, IAM policies related only for
that particular department, right? So that’s where I use
the concept of folders to group similar
functional units or similar functions together
as part of the folder structure. Then you have the
actual project, which derives from the folder, right? And folder is an optional level. You don’t have to have
that folder structure, but we typically
recommend that if you want to bunch or group together
similar functional units. At the project level, that’s
where the real work actually happens. That’s when you stand
up your resources. You spin up your applications. That’s where you deploy
everything, right? So the project is the primary
control that you have. Now, once you have set up
this organizational structure in place, typically
our customers say, so what is the
recommended project structure that I need to have? Do I need to have
it by environment? Do I need to have
it by department? Again, typical answer
is it depends, right? So if you are an
organization, unless you have very specific requirements– so let’s say I have
an organization. And there is
another organization which has a completely different
set of policies and functions, and I want complete isolation. That’s when you really need
two separate organizations. Otherwise, one organization
is the way to go. One domain is the way to go. And then from a
project level, that’s where you start
segregating projects. So I could have projects created
for dev environment, for my QA, or my production environment. Or I could also slice
it in a different way, say I need my project
structure to be set up by my functional units. For marketing, you have a certain set of project hierarchy. You have certain users, roles, and policies assigned by that functional unit.
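To make the hierarchy tangible, here is a short, hedged Python sketch that walks an organization’s folders and their projects using the google-cloud-resource-manager (v3) client; the organization ID is hypothetical.

    from google.cloud import resourcemanager_v3

    folders_client = resourcemanager_v3.FoldersClient()
    projects_client = resourcemanager_v3.ProjectsClient()

    org = "organizations/123456789012"    # hypothetical organization ID

    # Organization -> folders (departments, environments) -> projects (workloads).
    for folder in folders_client.list_folders(parent=org):
        print("Folder:", folder.display_name)
        for project in projects_client.list_projects(parent=folder.name):
            print("  Project:", project.project_id, dict(project.labels))

Policies attached at the organization or folder node are inherited by everything underneath, which is why getting this structure right early matters so much.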
So that’s the way that
we typically look at it. So once you get
that right, then you start thinking about the
concept of identity and roles. So really there are two
kinds of identity in GCP. One is the user identity. And then there is a
service account identity. So users are pretty
much all your users, the individual
users who are going to be using the environment. So that’s straightforward. The service accounts
are slightly tricky. So they are
typically used if you want to perform an operation
on behalf of another user, on behalf of an application. That’s when service
accounts are typically used. You can use it as an identity
as well as a resource. So what I mean by
that is, let’s say I want to have some
kind of a notification or an automated job
that I want to be running in the background. I don’t want to be running it as an individual user. That’s when the concept of service account actually applies.
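As a hedged sketch of that pattern, here is a background job authenticating as a service account rather than as a person, using the google-auth and google-cloud-storage Python libraries; the key file path, project ID, and bucket name are hypothetical.

    from google.cloud import storage
    from google.oauth2 import service_account

    # Load the key for the automation identity (hypothetical path).
    credentials = service_account.Credentials.from_service_account_file(
        "/etc/keys/nightly-job-sa.json",
        scopes=["https://www.googleapis.com/auth/devstorage.read_only"],
    )

    # The job now acts as the service account, not as any individual user,
    # so its permissions can be scoped tightly and audited separately.
    client = storage.Client(project="my-project", credentials=credentials)
    for blob in client.list_blobs("nightly-reports"):
        print(blob.name)

On GCP itself you would usually attach the service account to the compute resource rather than shipping a key file, but the identity model is the same.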
So it’s very critical
at a high level, before you start moving all
your workloads to the cloud or before you set up your
workloads on the cloud, to ensure that you have the
IAM structure built out, you have your users, roles,
and policies that are clearly mapped out, and then you
start migrating your data and set up your
projects in place. So, Rob, anything
specific from an identity perspective that
Google is working on, any of the enhancements? ROB SADOWSKI: Well, I think
you gave a good overview here. I think one thing that
I talked about in terms of policy intelligence,
I talked a little bit about making recommendations. There is also a
complementary piece called Access Troubleshooter
that, as you set up roles and as you set up permissions,
if something isn’t able to be accessed,
Troubleshooter can help you walk through
and figure out exactly why. So you don’t wind up saying just
grant all privileges, right? So you can continue to adhere
to those particular things. And then we’ve made a number
of other announcements around our own cloud
identity, where a lot of times we set up those accounts, or the
ability to use Active Directory and use that as a
managed service. So those were a
couple of things, but I think especially
in terms of what we’re doing around policy intelligence, helping you get those permissions right and helping you make sure that you are continuing to enforce things like least
privilege effectively across multiple things. GOKUL RAGHURAMAN: That’s
actually very good, because what we’ve noticed is a
lot of the times our users have set up their predefined roles
and the users are already set up. But a lot of times, they’re
not even using those roles. So the idea is to have
fine-grained roles and policies associated to that. A lot of our clients start with
the traditional primitive roles, which are no longer recommended, right? So then you have the predefined roles, which give you a bundling of those permissions, which is good. But really the custom roles are where you get the full power of Google, right? You’ll be able to tailor exactly what permission set a particular user is required to use.
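As a hedged example of auditing for that, here is a small Python sketch using the Cloud Resource Manager v1 API (via google-api-python-client) to flag project bindings that still use the broad primitive roles; the project ID is hypothetical.

    from googleapiclient import discovery

    PROJECT_ID = "my-project"                      # hypothetical project ID
    PRIMITIVE_ROLES = {"roles/owner", "roles/editor", "roles/viewer"}

    crm = discovery.build("cloudresourcemanager", "v1")
    policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

    # Flag bindings that still grant broad primitive roles instead of
    # predefined or custom roles.
    for binding in policy.get("bindings", []):
        if binding["role"] in PRIMITIVE_ROLES:
            print(f"{binding['role']} is granted to: {binding['members']}")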
With this new implementation
that Google has come out with, you already have a way
that Google is going to tell you, hey, you have this user. You have assigned
these permissions, but it’s not required. You’re not using them,
so start taking it back. So that’s very helpful. SHAHED LATIF: Go ahead. ROB SADOWSKI: Yeah. I mean, I think the
other analog, Shahed, is that now we understand and
are trying to gain control over those users, but we also
have to audit that activity. And so monitoring and
logging is something, Shahed, you know, you referred
to at the beginning. What are your kind of best
practices or thinking around that as you work with clients
around monitoring, logging, and being able to provide
that documentation? GOKUL RAGHURAMAN: Absolutely. So traditionally,
all of our customers have some kind of a logging and
monitoring solution on-prem, right? So they want to leverage
the same solution as they go to the cloud as well. And a lot of times,
that will work. But by default, Google
already provides you capabilities like
Stackdriver, which will let you look at the logs. The logs are
automatically enabled. They’re automatically
stored in Stackdriver. And you have a way to collect
it, export it, and then do some analytics on top of it. So at the base infrastructure layer level, you already have tools like Stackdriver, which will let you do that.
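As a hedged illustration, here is a minimal Python sketch that pulls recent high-severity Compute Engine entries through the google-cloud-logging client, the kind of query you might feed into an export or analytics pipeline; the project ID is hypothetical.

    from google.cloud import logging

    client = logging.Client(project="my-project")   # hypothetical project ID

    # Recent warning-or-worse entries for Compute Engine resources.
    log_filter = 'resource.type="gce_instance" AND severity>=WARNING'
    for entry in client.list_entries(filter_=log_filter, page_size=100):
        print(entry.timestamp, entry.severity, entry.log_name)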
Then you’ll need
to start looking at what kind of
application level logs and monitoring that
I’ll have to manage. And then finally, you look at
it from a business transparency perspective. What are the business
metrics and the logs that I need to be
starting to collect? So those are the three
levels that I would typically start looking at. The other good thing
in Google, and I know you can talk a little bit
about the access transparency, but really when we
go to a customer– and it goes back to how
we structure our projects. So typically, we would have
a security SecOps engineering team, which is focused primarily on looking at the audit events, looking at the security
events that are happening. So the way that you
structure your projects, again, is very critical. So you would have
your engineering teams focus on a separate
project structure. All your security logging
and all those logs will be managed on
a separate project. And only your security
engineering team gets access to that. So that is clear
separation of duties. You have a way to monitor
who gets access to the logs. And you can report on it if there is unauthorized access to the logs.
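One hedged way to implement that separation is a log sink that exports audit entries out of the workload project into a bucket owned by the security team’s project; here is a sketch with the google-cloud-logging client, using made-up project and bucket names.

    from google.cloud import logging

    # Hypothetical names: the workload project, and a bucket that lives in a
    # separate project only the security engineering team can access.
    client = logging.Client(project="workload-project")

    sink = client.sink(
        "security-audit-export",
        filter_='logName:"cloudaudit.googleapis.com"',
        destination="storage.googleapis.com/security-team-audit-logs",
    )
    if not sink.exists():
        sink.create()
        # The sink's writer identity then needs write access on the destination
        # bucket, granted from the security project side.
        print("Created sink security-audit-export")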
Again, some of the things that
we have seen as best practice is go with fine-grained roles. Give absolutely
minimum permissions. Start with the editor
role, because editor
lot of cases for you to be able to manage
the logs and do some kind of instrumentation
from the logs. So start there, and then
expand from those capabilities. By default, also, another
question that comes up is our customers ask, I
stored all these logs. Are these logs encrypted? Are they stored
in a safe manner? When they are stored
in your cloud storage buckets, they are all
encrypted by default. You have a way to actually
use your secret management capabilities to manage
access to those logs. So those are some of the
high level best practices that we recommend as we look
at logging and monitoring as we work with our clients. Do you want to talk a little bit
about what Google has provided? ROB SADOWSKI: Yeah. I mean, I think
that just one thing you talked about, separation
of logs by project and being able to get
that granularity– I think everyone,
if you’re not aware, when we think about Cloud Audit Logging there are actually three distinct sets of logs. There are admin
activity logs, which allow you to see what
administrators are doing. There’s system event logs which,
if you’re starting something on Compute Engine, what
are the different things? Like if a live
migration happens, when did that
happen, for example? There are also data access logs. So you can see not only from
an administrative perspective, but also from a user
perspective, people who have roles or
permissions, when that data has been accessed. So there’s a nice real
separation between those. So depending on what
you’re looking for and what you are going to have to provide documentation on, the different types of logs can support that.
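For reference, each of those three streams has its own log name, so they can be queried or exported separately; here is a hedged Python sketch with a hypothetical project ID.

    from google.cloud import logging

    PROJECT = "my-project"                           # hypothetical project ID
    AUDIT_LOGS = {
        "admin_activity": f"projects/{PROJECT}/logs/cloudaudit.googleapis.com%2Factivity",
        "system_event":   f"projects/{PROJECT}/logs/cloudaudit.googleapis.com%2Fsystem_event",
        "data_access":    f"projects/{PROJECT}/logs/cloudaudit.googleapis.com%2Fdata_access",
    }

    client = logging.Client(project=PROJECT)
    for kind, log_name in AUDIT_LOGS.items():
        print(f"--- {kind} ---")
        for entry in client.list_entries(filter_=f'logName="{log_name}"', page_size=5):
            print(entry.timestamp, entry.resource.type)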
And then we also augment. You know, that’s
for your own users and your own administrators. But we augment that with
access transparency logs. So this is an optional
feature that you can turn on, which allows you to see if
Google support engineers or administrators are
accessing any of your data. So that helps with the
overall understanding of what’s happening
in my environment, being able to provide a full
log or near full log of access of that. And so that kind of
complements that piece. And another thing
we talked about– that many organizations
have existing log management systems and capabilities,
things like a Splunk or other systems like that in
Cloud Security Command Center with actually added
capabilities in the GA release to support direct connection
to that with this Splunk connector. So if that’s where
you’re doing some of the reporting and auditing,
you can get events from there. So yeah– I mean, clearly
an area where we have focus. So maybe to wrap
this up, Shahed, are there specific tools or
frameworks or other things that you use specifically
with your clients in driving through
some of these issues? SHAHED LATIF: Sure. I’ve just summarized them
here quickly on the slide to speed up our discussion. So as I mentioned before,
and we gave a very high level representation on
the earlier slides, we created an
architectural blueprint. We’ve actually
worked with clients where they’ve asked
us, tell us what good looks like from an
architectural perspective. So we already have pre-designed
templates around what architecture should look like. We then tailor that to
the needs of the client. And that’s been a
huge success for us in trying to
accelerate the program. And that, again– process,
controls, and technology, we look at all three elements. We also make sure
that the controls that we apply distinguish
between what’s available in Google
and what’s not. And that’s really
important to know, because we’ve also developed
an understanding of the vendor ecosystem. A lot of our clients say,
what is the best tool to use? What’s the right
thing to use here? We developed a point
of view on that. We’ve developed a
selection of criteria against different
functions and then a [INAUDIBLE] analysis around
that and developed a portfolio. It’s hard to keep
up, because there’s a lot of new third party
products out on the market. But we’ve got that as
well at our disposal. The other thing we have
is we’ve developed a tool. Because, often, we can put a
tool that’s already available, but it takes time
to put that there. So we developed some
scripts of our own based, again, on
benchmarks that are available in the
industry, namely CIS. And we run our scripts and quickly find out whether things are hardened the right way at the VM level.
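As a hedged sketch of what such a check can look like (not PwC’s actual scripts), here is a small Python example using google-api-python-client to flag two CIS-style findings on Compute Engine instances; the project and zone are hypothetical.

    from googleapiclient import discovery

    PROJECT, ZONE = "my-project", "us-central1-a"    # hypothetical values

    compute = discovery.build("compute", "v1")
    resp = compute.instances().list(project=PROJECT, zone=ZONE).execute()

    for inst in resp.get("items", []):
        # CIS-style check: interactive serial port access should be disabled.
        metadata = {i["key"]: i.get("value", "")
                    for i in inst.get("metadata", {}).get("items", [])}
        if metadata.get("serial-port-enable", "false").lower() in ("1", "true"):
            print(f"{inst['name']}: serial port access is enabled")

        # CIS-style check: avoid full cloud-platform scope on attached accounts.
        for sa in inst.get("serviceAccounts", []):
            if "https://www.googleapis.com/auth/cloud-platform" in sa.get("scopes", []):
                print(f"{inst['name']}: {sa['email']} has full cloud-platform scope")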
And then finally,
best practices– because we’ve been
to a few clients now, we’ve captured best practices. So we don’t have to
repeat the best practices. It’s hard to keep up
with all the changes that are going on with Google. But as we do, we keep
updating for that as well. ROB SADOWSKI: So we’re
almost done here. You know, Gokul
or Shahed, if you had to make a couple of key
takeaways or recommendations around this and architecting,
you know, what would they be? SHAHED LATIF: I think it’s
these four, five basic things. Understand your
regulatory environment. And understand it well. And what does that
really mean for you? Assess the key risks. I didn’t touch about
threat modeling. But threat modeling is
essential to understand what the new attack surfaces are that the cloud can bring to you. Once you understand
those things, you’ve got the baselines
of understanding. Start to implement the
right architecture. Use pre-tested
architecture blueprints. Don’t make it up. And then identify the
third party solutions to augment your security. Google Cloud does
provide a lot, but it doesn’t provide everything. So make sure you
have that identified. And then have monitoring and auditing directly addressed. If you address
these, you are well on the way to a secure cloud. ROB SADOWSKI: All right, great. Well, thank you all for coming
and listening to our dialogue. And thank our panelists. And have a good rest of
Next, not too much left. [APPLAUSE] [MUSIC PLAYING]
